00:00:00.001 Started by upstream project "autotest-per-patch" build number 126195 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23955 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.043 The recommended git tool is: git 00:00:00.043 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.079 Fetching changes from the remote Git repository 00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.133 Using shallow fetch with depth 1 00:00:00.133 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.133 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.257 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.257 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/51/24051/2 # timeout=5 00:00:03.980 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.991 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.002 Checking out Revision 74850c0aca59a95b8f6e0c0ea246ac78dd77feb5 (FETCH_HEAD) 00:00:04.002 > git config core.sparsecheckout # timeout=10 00:00:04.013 > git read-tree -mu HEAD # timeout=10 00:00:04.029 > git checkout -f 74850c0aca59a95b8f6e0c0ea246ac78dd77feb5 # timeout=5 00:00:04.050 Commit message: "jenkins/jjb: drop nvmf-tcp-vg-autotest" 00:00:04.050 > git rev-list --no-walk b36476c4eef2004836014399fbf414610d5aa128 # timeout=10 00:00:04.137 [Pipeline] Start of Pipeline 00:00:04.151 [Pipeline] library 00:00:04.152 Loading library shm_lib@master 00:00:04.153 Library shm_lib@master is cached. Copying from home. 00:00:04.171 [Pipeline] node 00:00:04.182 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.184 [Pipeline] { 00:00:04.192 [Pipeline] catchError 00:00:04.193 [Pipeline] { 00:00:04.205 [Pipeline] wrap 00:00:04.213 [Pipeline] { 00:00:04.222 [Pipeline] stage 00:00:04.224 [Pipeline] { (Prologue) 00:00:04.457 [Pipeline] sh 00:00:04.735 + logger -p user.info -t JENKINS-CI 00:00:04.782 [Pipeline] echo 00:00:04.784 Node: WFP22 00:00:04.793 [Pipeline] sh 00:00:05.137 [Pipeline] setCustomBuildProperty 00:00:05.165 [Pipeline] echo 00:00:05.166 Cleanup processes 00:00:05.170 [Pipeline] sh 00:00:05.448 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.448 2752289 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.463 [Pipeline] sh 00:00:05.742 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.742 ++ awk '{print $1}' 00:00:05.742 ++ grep -v 'sudo pgrep' 00:00:05.742 + sudo kill -9 00:00:05.742 + true 00:00:05.763 [Pipeline] cleanWs 00:00:05.770 [WS-CLEANUP] Deleting project workspace... 00:00:05.770 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.776 [WS-CLEANUP] done 00:00:05.780 [Pipeline] setCustomBuildProperty 00:00:05.793 [Pipeline] sh 00:00:06.068 + sudo git config --global --replace-all safe.directory '*' 00:00:06.148 [Pipeline] httpRequest 00:00:06.181 [Pipeline] echo 00:00:06.183 Sorcerer 10.211.164.101 is alive 00:00:06.190 [Pipeline] httpRequest 00:00:06.195 HttpMethod: GET 00:00:06.195 URL: http://10.211.164.101/packages/jbp_74850c0aca59a95b8f6e0c0ea246ac78dd77feb5.tar.gz 00:00:06.196 Sending request to url: http://10.211.164.101/packages/jbp_74850c0aca59a95b8f6e0c0ea246ac78dd77feb5.tar.gz 00:00:06.215 Response Code: HTTP/1.1 200 OK 00:00:06.215 Success: Status code 200 is in the accepted range: 200,404 00:00:06.215 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_74850c0aca59a95b8f6e0c0ea246ac78dd77feb5.tar.gz 00:00:12.091 [Pipeline] sh 00:00:12.375 + tar --no-same-owner -xf jbp_74850c0aca59a95b8f6e0c0ea246ac78dd77feb5.tar.gz 00:00:12.388 [Pipeline] httpRequest 00:00:12.422 [Pipeline] echo 00:00:12.423 Sorcerer 10.211.164.101 is alive 00:00:12.432 [Pipeline] httpRequest 00:00:12.436 HttpMethod: GET 00:00:12.436 URL: http://10.211.164.101/packages/spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:00:12.437 Sending request to url: http://10.211.164.101/packages/spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:00:12.447 Response Code: HTTP/1.1 200 OK 00:00:12.448 Success: Status code 200 is in the accepted range: 200,404 00:00:12.448 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:01:52.995 [Pipeline] sh 00:01:53.279 + tar --no-same-owner -xf spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:01:55.827 [Pipeline] sh 00:01:56.109 + git -C spdk log --oneline -n5 00:01:56.109 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:56.109 2d30d9f83 accel: introduce tasks in sequence limit 00:01:56.109 2728651ee accel: adjust task per ch define name 00:01:56.109 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:56.109 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:56.122 [Pipeline] } 00:01:56.137 [Pipeline] // stage 00:01:56.145 [Pipeline] stage 00:01:56.147 [Pipeline] { (Prepare) 00:01:56.165 [Pipeline] writeFile 00:01:56.180 [Pipeline] sh 00:01:56.458 + logger -p user.info -t JENKINS-CI 00:01:56.474 [Pipeline] sh 00:01:56.795 + logger -p user.info -t JENKINS-CI 00:01:56.807 [Pipeline] sh 00:01:57.086 + cat autorun-spdk.conf 00:01:57.086 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.086 SPDK_TEST_NVMF=1 00:01:57.086 SPDK_TEST_NVME_CLI=1 00:01:57.086 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.086 SPDK_TEST_NVMF_NICS=e810 00:01:57.086 SPDK_TEST_VFIOUSER=1 00:01:57.086 SPDK_RUN_UBSAN=1 00:01:57.086 NET_TYPE=phy 00:01:57.093 RUN_NIGHTLY=0 00:01:57.099 [Pipeline] readFile 00:01:57.138 [Pipeline] withEnv 00:01:57.141 [Pipeline] { 00:01:57.168 [Pipeline] sh 00:01:57.469 + set -ex 00:01:57.469 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:57.469 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:57.469 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.469 ++ SPDK_TEST_NVMF=1 00:01:57.469 ++ SPDK_TEST_NVME_CLI=1 00:01:57.469 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.469 ++ SPDK_TEST_NVMF_NICS=e810 00:01:57.469 ++ SPDK_TEST_VFIOUSER=1 00:01:57.469 ++ SPDK_RUN_UBSAN=1 00:01:57.469 ++ NET_TYPE=phy 00:01:57.469 ++ RUN_NIGHTLY=0 00:01:57.469 + case $SPDK_TEST_NVMF_NICS in 00:01:57.469 + DRIVERS=ice 00:01:57.469 + [[ tcp == \r\d\m\a ]] 
00:01:57.469 + [[ -n ice ]] 00:01:57.469 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:57.469 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:57.469 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:57.469 rmmod: ERROR: Module irdma is not currently loaded 00:01:57.470 rmmod: ERROR: Module i40iw is not currently loaded 00:01:57.470 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:57.470 + true 00:01:57.470 + for D in $DRIVERS 00:01:57.470 + sudo modprobe ice 00:01:57.470 + exit 0 00:01:57.479 [Pipeline] } 00:01:57.497 [Pipeline] // withEnv 00:01:57.503 [Pipeline] } 00:01:57.521 [Pipeline] // stage 00:01:57.532 [Pipeline] catchError 00:01:57.534 [Pipeline] { 00:01:57.551 [Pipeline] timeout 00:01:57.552 Timeout set to expire in 50 min 00:01:57.554 [Pipeline] { 00:01:57.571 [Pipeline] stage 00:01:57.573 [Pipeline] { (Tests) 00:01:57.591 [Pipeline] sh 00:01:57.875 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.875 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.875 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.875 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:57.875 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.875 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:57.875 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:57.875 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:57.875 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:57.875 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:57.875 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:57.875 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.875 + source /etc/os-release 00:01:57.875 ++ NAME='Fedora Linux' 00:01:57.875 ++ VERSION='38 (Cloud Edition)' 00:01:57.875 ++ ID=fedora 00:01:57.875 ++ VERSION_ID=38 00:01:57.875 ++ VERSION_CODENAME= 00:01:57.875 ++ PLATFORM_ID=platform:f38 00:01:57.875 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:57.875 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:57.875 ++ LOGO=fedora-logo-icon 00:01:57.875 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:57.875 ++ HOME_URL=https://fedoraproject.org/ 00:01:57.875 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:57.875 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:57.875 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:57.875 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:57.875 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:57.875 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:57.875 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:57.875 ++ SUPPORT_END=2024-05-14 00:01:57.875 ++ VARIANT='Cloud Edition' 00:01:57.875 ++ VARIANT_ID=cloud 00:01:57.875 + uname -a 00:01:57.875 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:57.875 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:01.164 Hugepages 00:02:01.164 node hugesize free / total 00:02:01.164 node0 1048576kB 0 / 0 00:02:01.164 node0 2048kB 0 / 0 00:02:01.164 node1 1048576kB 0 / 0 00:02:01.164 node1 2048kB 0 / 0 00:02:01.164 00:02:01.164 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:01.164 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:00:04.3 
8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:01.164 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:01.164 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:01.164 + rm -f /tmp/spdk-ld-path 00:02:01.164 + source autorun-spdk.conf 00:02:01.164 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.164 ++ SPDK_TEST_NVMF=1 00:02:01.164 ++ SPDK_TEST_NVME_CLI=1 00:02:01.164 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.164 ++ SPDK_TEST_NVMF_NICS=e810 00:02:01.164 ++ SPDK_TEST_VFIOUSER=1 00:02:01.164 ++ SPDK_RUN_UBSAN=1 00:02:01.164 ++ NET_TYPE=phy 00:02:01.164 ++ RUN_NIGHTLY=0 00:02:01.164 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:01.164 + [[ -n '' ]] 00:02:01.164 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:01.164 + for M in /var/spdk/build-*-manifest.txt 00:02:01.164 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:01.164 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:01.164 + for M in /var/spdk/build-*-manifest.txt 00:02:01.164 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:01.164 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:01.164 ++ uname 00:02:01.164 + [[ Linux == \L\i\n\u\x ]] 00:02:01.164 + sudo dmesg -T 00:02:01.164 + sudo dmesg --clear 00:02:01.164 + dmesg_pid=2753731 00:02:01.164 + [[ Fedora Linux == FreeBSD ]] 00:02:01.164 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:01.164 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:01.164 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:01.164 + [[ -x /usr/src/fio-static/fio ]] 00:02:01.164 + export FIO_BIN=/usr/src/fio-static/fio 00:02:01.164 + FIO_BIN=/usr/src/fio-static/fio 00:02:01.164 + sudo dmesg -Tw 00:02:01.164 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:01.164 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:01.164 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:01.164 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:01.164 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:01.164 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:01.164 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:01.164 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:01.164 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:01.164 Test configuration: 00:02:01.164 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.164 SPDK_TEST_NVMF=1 00:02:01.164 SPDK_TEST_NVME_CLI=1 00:02:01.164 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.164 SPDK_TEST_NVMF_NICS=e810 00:02:01.164 SPDK_TEST_VFIOUSER=1 00:02:01.164 SPDK_RUN_UBSAN=1 00:02:01.164 NET_TYPE=phy 00:02:01.164 RUN_NIGHTLY=0 15:07:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:01.424 15:07:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:01.424 15:07:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:01.424 15:07:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:01.424 15:07:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.424 15:07:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.424 15:07:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.424 15:07:05 -- paths/export.sh@5 -- $ export PATH 00:02:01.424 15:07:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.424 15:07:05 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:01.424 15:07:05 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:01.424 15:07:05 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721048825.XXXXXX 00:02:01.424 15:07:05 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721048825.Fup00G 00:02:01.424 15:07:05 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:01.424 15:07:05 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:01.424 15:07:05 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:01.424 15:07:05 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:01.424 15:07:05 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:01.424 15:07:05 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:01.424 15:07:05 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:01.424 15:07:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.424 15:07:05 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:01.424 15:07:05 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:01.424 15:07:05 -- pm/common@17 -- $ local monitor 00:02:01.424 15:07:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.424 15:07:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.424 15:07:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.424 15:07:05 -- pm/common@21 -- $ date +%s 00:02:01.424 15:07:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:01.424 15:07:05 -- pm/common@21 -- $ date +%s 00:02:01.424 15:07:05 -- pm/common@25 -- $ sleep 1 00:02:01.424 15:07:05 -- pm/common@21 -- $ date +%s 00:02:01.424 15:07:05 -- pm/common@21 -- $ date +%s 00:02:01.424 15:07:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048825 00:02:01.424 15:07:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048825 00:02:01.424 15:07:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048825 00:02:01.424 15:07:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048825 00:02:01.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048825_collect-vmstat.pm.log 00:02:01.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048825_collect-cpu-load.pm.log 00:02:01.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048825_collect-cpu-temp.pm.log 00:02:01.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048825_collect-bmc-pm.bmc.pm.log 00:02:02.367 15:07:06 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:02.367 15:07:06 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:02.367 15:07:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:02.367 15:07:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.367 15:07:06 -- spdk/autobuild.sh@16 -- $ date -u 00:02:02.367 Mon Jul 15 01:07:06 PM UTC 2024 00:02:02.367 15:07:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:02.367 v24.09-pre-208-g248c547d0 00:02:02.367 15:07:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:02.367 15:07:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:02.367 15:07:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:02.367 15:07:06 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:02.367 15:07:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:02.367 15:07:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.367 ************************************ 00:02:02.367 START TEST ubsan 00:02:02.367 ************************************ 00:02:02.367 15:07:06 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:02.367 using ubsan 00:02:02.367 00:02:02.367 real 0m0.001s 00:02:02.367 user 0m0.001s 00:02:02.367 sys 0m0.000s 00:02:02.367 15:07:06 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:02.367 15:07:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:02.367 ************************************ 00:02:02.367 END TEST ubsan 00:02:02.367 ************************************ 00:02:02.367 15:07:06 -- common/autotest_common.sh@1142 -- $ return 0 00:02:02.367 15:07:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:02.367 15:07:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:02.367 15:07:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:02.367 15:07:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:02.367 15:07:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:02.367 15:07:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:02.367 15:07:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:02.367 15:07:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:02.367 15:07:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:02.625 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:02.625 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:02.882 Using 'verbs' RDMA provider 00:02:16.024 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:30.911 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:30.911 Creating mk/config.mk...done. 00:02:30.911 Creating mk/cc.flags.mk...done. 00:02:30.911 Type 'make' to build. 
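Note: the ./configure invocation above is printed verbatim by autobuild.sh@67. A minimal sketch of reproducing that step by hand, assuming a local SPDK checkout with submodules initialized and build dependencies already installed; the /usr/src/fio path and the job count are specific to this CI host:

  # sketch only: same flags autobuild passed above
  cd spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j"$(nproc)"    # this CI run uses make -j112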
00:02:30.911 15:07:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:30.911 15:07:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:30.911 15:07:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:30.911 15:07:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.911 ************************************ 00:02:30.911 START TEST make 00:02:30.911 ************************************ 00:02:30.911 15:07:33 make -- common/autotest_common.sh@1123 -- $ make -j112 00:02:30.911 make[1]: Nothing to be done for 'all'. 00:02:31.168 The Meson build system 00:02:31.168 Version: 1.3.1 00:02:31.168 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:31.168 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:31.168 Build type: native build 00:02:31.168 Project name: libvfio-user 00:02:31.168 Project version: 0.0.1 00:02:31.168 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:31.168 C linker for the host machine: cc ld.bfd 2.39-16 00:02:31.168 Host machine cpu family: x86_64 00:02:31.168 Host machine cpu: x86_64 00:02:31.168 Run-time dependency threads found: YES 00:02:31.168 Library dl found: YES 00:02:31.168 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:31.168 Run-time dependency json-c found: YES 0.17 00:02:31.168 Run-time dependency cmocka found: YES 1.1.7 00:02:31.168 Program pytest-3 found: NO 00:02:31.168 Program flake8 found: NO 00:02:31.168 Program misspell-fixer found: NO 00:02:31.168 Program restructuredtext-lint found: NO 00:02:31.168 Program valgrind found: YES (/usr/bin/valgrind) 00:02:31.168 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:31.168 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:31.168 Compiler for C supports arguments -Wwrite-strings: YES 00:02:31.168 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:31.168 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:31.168 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:31.168 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:31.168 Build targets in project: 8 00:02:31.168 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:31.168 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:31.168 00:02:31.168 libvfio-user 0.0.1 00:02:31.168 00:02:31.168 User defined options 00:02:31.168 buildtype : debug 00:02:31.168 default_library: shared 00:02:31.168 libdir : /usr/local/lib 00:02:31.168 00:02:31.168 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.735 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:31.735 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:31.735 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:31.735 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:31.735 [4/37] Compiling C object samples/null.p/null.c.o 00:02:31.735 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:31.735 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:31.735 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:31.735 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:31.735 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:31.735 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:31.735 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:31.735 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:31.735 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:31.735 [14/37] Compiling C object samples/server.p/server.c.o 00:02:31.735 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:31.735 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:31.735 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:31.735 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:31.735 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:31.735 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:31.735 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:31.735 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:31.735 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:31.735 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:31.735 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:31.735 [26/37] Compiling C object samples/client.p/client.c.o 00:02:31.735 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:31.735 [28/37] Linking target samples/client 00:02:31.735 [29/37] Linking target test/unit_tests 00:02:32.004 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:32.004 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:32.004 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:32.004 [33/37] Linking target samples/gpio-pci-idio-16 00:02:32.004 [34/37] Linking target samples/lspci 00:02:32.004 [35/37] Linking target samples/server 00:02:32.004 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:32.004 [37/37] Linking target samples/null 00:02:32.004 INFO: autodetecting backend as ninja 00:02:32.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
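Note: the [1/37]..[37/37] targets above are SPDK's bundled libvfio-user, pulled in by --with-vfio-user. A rough sketch of the equivalent manual meson/ninja sequence, with paths and options taken from the log output (not the exact autobuild code path); the install step that follows in the log corresponds to the last command:

  # sketch: configure, compile and stage the libvfio-user submodule
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  meson setup --buildtype=debug -Ddefault_library=shared \
      "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user"
  ninja -C "$SPDK/build/libvfio-user/build-debug"
  DESTDIR="$SPDK/build/libvfio-user" meson install --quiet \
      -C "$SPDK/build/libvfio-user/build-debug"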
00:02:32.280 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:32.538 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:32.538 ninja: no work to do. 00:02:37.809 The Meson build system 00:02:37.809 Version: 1.3.1 00:02:37.809 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:37.809 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:37.809 Build type: native build 00:02:37.809 Program cat found: YES (/usr/bin/cat) 00:02:37.809 Project name: DPDK 00:02:37.809 Project version: 24.03.0 00:02:37.809 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.809 C linker for the host machine: cc ld.bfd 2.39-16 00:02:37.810 Host machine cpu family: x86_64 00:02:37.810 Host machine cpu: x86_64 00:02:37.810 Message: ## Building in Developer Mode ## 00:02:37.810 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.810 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:37.810 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.810 Program python3 found: YES (/usr/bin/python3) 00:02:37.810 Program cat found: YES (/usr/bin/cat) 00:02:37.810 Compiler for C supports arguments -march=native: YES 00:02:37.810 Checking for size of "void *" : 8 00:02:37.810 Checking for size of "void *" : 8 (cached) 00:02:37.810 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:37.810 Library m found: YES 00:02:37.810 Library numa found: YES 00:02:37.810 Has header "numaif.h" : YES 00:02:37.810 Library fdt found: NO 00:02:37.810 Library execinfo found: NO 00:02:37.810 Has header "execinfo.h" : YES 00:02:37.810 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.810 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.810 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.810 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.810 Run-time dependency openssl found: YES 3.0.9 00:02:37.810 Run-time dependency libpcap found: YES 1.10.4 00:02:37.810 Has header "pcap.h" with dependency libpcap: YES 00:02:37.810 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.810 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.810 Compiler for C supports arguments -Wformat: YES 00:02:37.810 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.810 Compiler for C supports arguments -Wformat-security: NO 00:02:37.810 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.810 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.810 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.810 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.810 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.810 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.810 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.810 Compiler for C supports arguments -Wundef: YES 00:02:37.810 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.810 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.810 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:37.810 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.810 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.810 Program objdump found: YES (/usr/bin/objdump) 00:02:37.810 Compiler for C supports arguments -mavx512f: YES 00:02:37.810 Checking if "AVX512 checking" compiles: YES 00:02:37.810 Fetching value of define "__SSE4_2__" : 1 00:02:37.810 Fetching value of define "__AES__" : 1 00:02:37.810 Fetching value of define "__AVX__" : 1 00:02:37.810 Fetching value of define "__AVX2__" : 1 00:02:37.810 Fetching value of define "__AVX512BW__" : 1 00:02:37.810 Fetching value of define "__AVX512CD__" : 1 00:02:37.810 Fetching value of define "__AVX512DQ__" : 1 00:02:37.810 Fetching value of define "__AVX512F__" : 1 00:02:37.810 Fetching value of define "__AVX512VL__" : 1 00:02:37.810 Fetching value of define "__PCLMUL__" : 1 00:02:37.810 Fetching value of define "__RDRND__" : 1 00:02:37.810 Fetching value of define "__RDSEED__" : 1 00:02:37.810 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.810 Fetching value of define "__znver1__" : (undefined) 00:02:37.810 Fetching value of define "__znver2__" : (undefined) 00:02:37.810 Fetching value of define "__znver3__" : (undefined) 00:02:37.810 Fetching value of define "__znver4__" : (undefined) 00:02:37.810 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.810 Message: lib/log: Defining dependency "log" 00:02:37.810 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.810 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.810 Checking for function "getentropy" : NO 00:02:37.810 Message: lib/eal: Defining dependency "eal" 00:02:37.810 Message: lib/ring: Defining dependency "ring" 00:02:37.810 Message: lib/rcu: Defining dependency "rcu" 00:02:37.810 Message: lib/mempool: Defining dependency "mempool" 00:02:37.810 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.810 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.810 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:37.810 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:37.810 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:37.810 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:37.810 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:37.810 Compiler for C supports arguments -mpclmul: YES 00:02:37.810 Compiler for C supports arguments -maes: YES 00:02:37.810 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.810 Compiler for C supports arguments -mavx512bw: YES 00:02:37.810 Compiler for C supports arguments -mavx512dq: YES 00:02:37.810 Compiler for C supports arguments -mavx512vl: YES 00:02:37.810 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.810 Compiler for C supports arguments -mavx2: YES 00:02:37.810 Compiler for C supports arguments -mavx: YES 00:02:37.810 Message: lib/net: Defining dependency "net" 00:02:37.810 Message: lib/meter: Defining dependency "meter" 00:02:37.810 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.810 Message: lib/pci: Defining dependency "pci" 00:02:37.810 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.810 Message: lib/hash: Defining dependency "hash" 00:02:37.810 Message: lib/timer: Defining dependency "timer" 00:02:37.810 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.810 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.810 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.810 
Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.810 Message: lib/power: Defining dependency "power" 00:02:37.810 Message: lib/reorder: Defining dependency "reorder" 00:02:37.810 Message: lib/security: Defining dependency "security" 00:02:37.810 Has header "linux/userfaultfd.h" : YES 00:02:37.810 Has header "linux/vduse.h" : YES 00:02:37.810 Message: lib/vhost: Defining dependency "vhost" 00:02:37.810 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.810 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.810 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.810 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.810 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:37.810 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:37.810 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:37.810 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:37.810 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:37.810 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:37.810 Program doxygen found: YES (/usr/bin/doxygen) 00:02:37.810 Configuring doxy-api-html.conf using configuration 00:02:37.810 Configuring doxy-api-man.conf using configuration 00:02:37.810 Program mandb found: YES (/usr/bin/mandb) 00:02:37.810 Program sphinx-build found: NO 00:02:37.810 Configuring rte_build_config.h using configuration 00:02:37.810 Message: 00:02:37.810 ================= 00:02:37.810 Applications Enabled 00:02:37.810 ================= 00:02:37.810 00:02:37.810 apps: 00:02:37.810 00:02:37.810 00:02:37.810 Message: 00:02:37.810 ================= 00:02:37.810 Libraries Enabled 00:02:37.810 ================= 00:02:37.810 00:02:37.810 libs: 00:02:37.810 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:37.810 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:37.810 cryptodev, dmadev, power, reorder, security, vhost, 00:02:37.810 00:02:37.810 Message: 00:02:37.810 =============== 00:02:37.810 Drivers Enabled 00:02:37.810 =============== 00:02:37.810 00:02:37.810 common: 00:02:37.810 00:02:37.810 bus: 00:02:37.810 pci, vdev, 00:02:37.810 mempool: 00:02:37.810 ring, 00:02:37.810 dma: 00:02:37.810 00:02:37.810 net: 00:02:37.810 00:02:37.810 crypto: 00:02:37.810 00:02:37.810 compress: 00:02:37.810 00:02:37.810 vdpa: 00:02:37.810 00:02:37.810 00:02:37.810 Message: 00:02:37.810 ================= 00:02:37.810 Content Skipped 00:02:37.810 ================= 00:02:37.810 00:02:37.810 apps: 00:02:37.810 dumpcap: explicitly disabled via build config 00:02:37.810 graph: explicitly disabled via build config 00:02:37.810 pdump: explicitly disabled via build config 00:02:37.810 proc-info: explicitly disabled via build config 00:02:37.810 test-acl: explicitly disabled via build config 00:02:37.810 test-bbdev: explicitly disabled via build config 00:02:37.810 test-cmdline: explicitly disabled via build config 00:02:37.810 test-compress-perf: explicitly disabled via build config 00:02:37.810 test-crypto-perf: explicitly disabled via build config 00:02:37.810 test-dma-perf: explicitly disabled via build config 00:02:37.810 test-eventdev: explicitly disabled via build config 00:02:37.810 test-fib: explicitly disabled via build config 00:02:37.810 test-flow-perf: explicitly disabled via build config 00:02:37.810 test-gpudev: explicitly disabled via build config 
00:02:37.810 test-mldev: explicitly disabled via build config 00:02:37.810 test-pipeline: explicitly disabled via build config 00:02:37.810 test-pmd: explicitly disabled via build config 00:02:37.810 test-regex: explicitly disabled via build config 00:02:37.810 test-sad: explicitly disabled via build config 00:02:37.810 test-security-perf: explicitly disabled via build config 00:02:37.810 00:02:37.810 libs: 00:02:37.810 argparse: explicitly disabled via build config 00:02:37.810 metrics: explicitly disabled via build config 00:02:37.810 acl: explicitly disabled via build config 00:02:37.810 bbdev: explicitly disabled via build config 00:02:37.810 bitratestats: explicitly disabled via build config 00:02:37.810 bpf: explicitly disabled via build config 00:02:37.810 cfgfile: explicitly disabled via build config 00:02:37.810 distributor: explicitly disabled via build config 00:02:37.810 efd: explicitly disabled via build config 00:02:37.810 eventdev: explicitly disabled via build config 00:02:37.810 dispatcher: explicitly disabled via build config 00:02:37.810 gpudev: explicitly disabled via build config 00:02:37.810 gro: explicitly disabled via build config 00:02:37.810 gso: explicitly disabled via build config 00:02:37.810 ip_frag: explicitly disabled via build config 00:02:37.810 jobstats: explicitly disabled via build config 00:02:37.810 latencystats: explicitly disabled via build config 00:02:37.810 lpm: explicitly disabled via build config 00:02:37.811 member: explicitly disabled via build config 00:02:37.811 pcapng: explicitly disabled via build config 00:02:37.811 rawdev: explicitly disabled via build config 00:02:37.811 regexdev: explicitly disabled via build config 00:02:37.811 mldev: explicitly disabled via build config 00:02:37.811 rib: explicitly disabled via build config 00:02:37.811 sched: explicitly disabled via build config 00:02:37.811 stack: explicitly disabled via build config 00:02:37.811 ipsec: explicitly disabled via build config 00:02:37.811 pdcp: explicitly disabled via build config 00:02:37.811 fib: explicitly disabled via build config 00:02:37.811 port: explicitly disabled via build config 00:02:37.811 pdump: explicitly disabled via build config 00:02:37.811 table: explicitly disabled via build config 00:02:37.811 pipeline: explicitly disabled via build config 00:02:37.811 graph: explicitly disabled via build config 00:02:37.811 node: explicitly disabled via build config 00:02:37.811 00:02:37.811 drivers: 00:02:37.811 common/cpt: not in enabled drivers build config 00:02:37.811 common/dpaax: not in enabled drivers build config 00:02:37.811 common/iavf: not in enabled drivers build config 00:02:37.811 common/idpf: not in enabled drivers build config 00:02:37.811 common/ionic: not in enabled drivers build config 00:02:37.811 common/mvep: not in enabled drivers build config 00:02:37.811 common/octeontx: not in enabled drivers build config 00:02:37.811 bus/auxiliary: not in enabled drivers build config 00:02:37.811 bus/cdx: not in enabled drivers build config 00:02:37.811 bus/dpaa: not in enabled drivers build config 00:02:37.811 bus/fslmc: not in enabled drivers build config 00:02:37.811 bus/ifpga: not in enabled drivers build config 00:02:37.811 bus/platform: not in enabled drivers build config 00:02:37.811 bus/uacce: not in enabled drivers build config 00:02:37.811 bus/vmbus: not in enabled drivers build config 00:02:37.811 common/cnxk: not in enabled drivers build config 00:02:37.811 common/mlx5: not in enabled drivers build config 00:02:37.811 common/nfp: not in 
enabled drivers build config 00:02:37.811 common/nitrox: not in enabled drivers build config 00:02:37.811 common/qat: not in enabled drivers build config 00:02:37.811 common/sfc_efx: not in enabled drivers build config 00:02:37.811 mempool/bucket: not in enabled drivers build config 00:02:37.811 mempool/cnxk: not in enabled drivers build config 00:02:37.811 mempool/dpaa: not in enabled drivers build config 00:02:37.811 mempool/dpaa2: not in enabled drivers build config 00:02:37.811 mempool/octeontx: not in enabled drivers build config 00:02:37.811 mempool/stack: not in enabled drivers build config 00:02:37.811 dma/cnxk: not in enabled drivers build config 00:02:37.811 dma/dpaa: not in enabled drivers build config 00:02:37.811 dma/dpaa2: not in enabled drivers build config 00:02:37.811 dma/hisilicon: not in enabled drivers build config 00:02:37.811 dma/idxd: not in enabled drivers build config 00:02:37.811 dma/ioat: not in enabled drivers build config 00:02:37.811 dma/skeleton: not in enabled drivers build config 00:02:37.811 net/af_packet: not in enabled drivers build config 00:02:37.811 net/af_xdp: not in enabled drivers build config 00:02:37.811 net/ark: not in enabled drivers build config 00:02:37.811 net/atlantic: not in enabled drivers build config 00:02:37.811 net/avp: not in enabled drivers build config 00:02:37.811 net/axgbe: not in enabled drivers build config 00:02:37.811 net/bnx2x: not in enabled drivers build config 00:02:37.811 net/bnxt: not in enabled drivers build config 00:02:37.811 net/bonding: not in enabled drivers build config 00:02:37.811 net/cnxk: not in enabled drivers build config 00:02:37.811 net/cpfl: not in enabled drivers build config 00:02:37.811 net/cxgbe: not in enabled drivers build config 00:02:37.811 net/dpaa: not in enabled drivers build config 00:02:37.811 net/dpaa2: not in enabled drivers build config 00:02:37.811 net/e1000: not in enabled drivers build config 00:02:37.811 net/ena: not in enabled drivers build config 00:02:37.811 net/enetc: not in enabled drivers build config 00:02:37.811 net/enetfec: not in enabled drivers build config 00:02:37.811 net/enic: not in enabled drivers build config 00:02:37.811 net/failsafe: not in enabled drivers build config 00:02:37.811 net/fm10k: not in enabled drivers build config 00:02:37.811 net/gve: not in enabled drivers build config 00:02:37.811 net/hinic: not in enabled drivers build config 00:02:37.811 net/hns3: not in enabled drivers build config 00:02:37.811 net/i40e: not in enabled drivers build config 00:02:37.811 net/iavf: not in enabled drivers build config 00:02:37.811 net/ice: not in enabled drivers build config 00:02:37.811 net/idpf: not in enabled drivers build config 00:02:37.811 net/igc: not in enabled drivers build config 00:02:37.811 net/ionic: not in enabled drivers build config 00:02:37.811 net/ipn3ke: not in enabled drivers build config 00:02:37.811 net/ixgbe: not in enabled drivers build config 00:02:37.811 net/mana: not in enabled drivers build config 00:02:37.811 net/memif: not in enabled drivers build config 00:02:37.811 net/mlx4: not in enabled drivers build config 00:02:37.811 net/mlx5: not in enabled drivers build config 00:02:37.811 net/mvneta: not in enabled drivers build config 00:02:37.811 net/mvpp2: not in enabled drivers build config 00:02:37.811 net/netvsc: not in enabled drivers build config 00:02:37.811 net/nfb: not in enabled drivers build config 00:02:37.811 net/nfp: not in enabled drivers build config 00:02:37.811 net/ngbe: not in enabled drivers build config 00:02:37.811 
net/null: not in enabled drivers build config 00:02:37.811 net/octeontx: not in enabled drivers build config 00:02:37.811 net/octeon_ep: not in enabled drivers build config 00:02:37.811 net/pcap: not in enabled drivers build config 00:02:37.811 net/pfe: not in enabled drivers build config 00:02:37.811 net/qede: not in enabled drivers build config 00:02:37.811 net/ring: not in enabled drivers build config 00:02:37.811 net/sfc: not in enabled drivers build config 00:02:37.811 net/softnic: not in enabled drivers build config 00:02:37.811 net/tap: not in enabled drivers build config 00:02:37.811 net/thunderx: not in enabled drivers build config 00:02:37.811 net/txgbe: not in enabled drivers build config 00:02:37.811 net/vdev_netvsc: not in enabled drivers build config 00:02:37.811 net/vhost: not in enabled drivers build config 00:02:37.811 net/virtio: not in enabled drivers build config 00:02:37.811 net/vmxnet3: not in enabled drivers build config 00:02:37.811 raw/*: missing internal dependency, "rawdev" 00:02:37.811 crypto/armv8: not in enabled drivers build config 00:02:37.811 crypto/bcmfs: not in enabled drivers build config 00:02:37.811 crypto/caam_jr: not in enabled drivers build config 00:02:37.811 crypto/ccp: not in enabled drivers build config 00:02:37.811 crypto/cnxk: not in enabled drivers build config 00:02:37.811 crypto/dpaa_sec: not in enabled drivers build config 00:02:37.811 crypto/dpaa2_sec: not in enabled drivers build config 00:02:37.811 crypto/ipsec_mb: not in enabled drivers build config 00:02:37.811 crypto/mlx5: not in enabled drivers build config 00:02:37.811 crypto/mvsam: not in enabled drivers build config 00:02:37.811 crypto/nitrox: not in enabled drivers build config 00:02:37.811 crypto/null: not in enabled drivers build config 00:02:37.811 crypto/octeontx: not in enabled drivers build config 00:02:37.811 crypto/openssl: not in enabled drivers build config 00:02:37.811 crypto/scheduler: not in enabled drivers build config 00:02:37.811 crypto/uadk: not in enabled drivers build config 00:02:37.811 crypto/virtio: not in enabled drivers build config 00:02:37.811 compress/isal: not in enabled drivers build config 00:02:37.811 compress/mlx5: not in enabled drivers build config 00:02:37.811 compress/nitrox: not in enabled drivers build config 00:02:37.811 compress/octeontx: not in enabled drivers build config 00:02:37.811 compress/zlib: not in enabled drivers build config 00:02:37.811 regex/*: missing internal dependency, "regexdev" 00:02:37.811 ml/*: missing internal dependency, "mldev" 00:02:37.811 vdpa/ifc: not in enabled drivers build config 00:02:37.811 vdpa/mlx5: not in enabled drivers build config 00:02:37.811 vdpa/nfp: not in enabled drivers build config 00:02:37.811 vdpa/sfc: not in enabled drivers build config 00:02:37.811 event/*: missing internal dependency, "eventdev" 00:02:37.811 baseband/*: missing internal dependency, "bbdev" 00:02:37.811 gpu/*: missing internal dependency, "gpudev" 00:02:37.811 00:02:37.811 00:02:38.070 Build targets in project: 85 00:02:38.070 00:02:38.070 DPDK 24.03.0 00:02:38.070 00:02:38.070 User defined options 00:02:38.070 buildtype : debug 00:02:38.070 default_library : shared 00:02:38.070 libdir : lib 00:02:38.070 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:38.070 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:38.070 c_link_args : 00:02:38.070 cpu_instruction_set: native 00:02:38.070 disable_apps : 
proc-info,test-fib,graph,test-dma-perf,test-mldev,test,test-regex,dumpcap,test-cmdline,test-acl,test-pipeline,test-flow-perf,pdump,test-sad,test-gpudev,test-security-perf,test-crypto-perf,test-bbdev,test-pmd,test-compress-perf,test-eventdev 00:02:38.070 disable_libs : bbdev,fib,dispatcher,distributor,bpf,latencystats,graph,mldev,efd,eventdev,gso,gpudev,acl,pipeline,stack,jobstats,ipsec,argparse,rib,pdcp,table,pdump,cfgfile,gro,pcapng,bitratestats,ip_frag,member,sched,node,port,metrics,lpm,regexdev,rawdev 00:02:38.070 enable_docs : false 00:02:38.070 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:38.070 enable_kmods : false 00:02:38.070 max_lcores : 128 00:02:38.070 tests : false 00:02:38.070 00:02:38.070 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.329 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:38.595 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:38.595 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:38.595 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.595 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.595 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.595 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.595 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.595 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.595 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.595 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.595 [11/268] Linking static target lib/librte_kvargs.a 00:02:38.857 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.857 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:38.857 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.857 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.857 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.857 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.857 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.857 [19/268] Linking static target lib/librte_log.a 00:02:38.857 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:38.857 [21/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:38.857 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:38.857 [23/268] Linking static target lib/librte_pci.a 00:02:38.857 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:38.857 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:38.858 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:38.858 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:38.858 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:38.858 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:38.858 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:38.858 [31/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:38.858 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:39.117 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:39.117 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:39.117 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:39.117 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.117 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.117 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.117 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.117 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.117 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.117 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.117 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.117 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.117 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.117 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.117 [47/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.117 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.117 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.117 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.117 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.117 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.117 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.117 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.117 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.117 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.117 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.117 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.117 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.117 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.117 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.117 [62/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.117 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.117 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.117 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.117 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.117 [67/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.117 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.117 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.117 [70/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.117 
[71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.117 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.117 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.117 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.376 [75/268] Linking static target lib/librte_meter.a 00:02:39.376 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.376 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.376 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.376 [79/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.376 [80/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.376 [81/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.376 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.376 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.376 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:39.376 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.376 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.376 [87/268] Linking static target lib/librte_telemetry.a 00:02:39.376 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.376 [89/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:39.376 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.376 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.376 [92/268] Linking static target lib/librte_ring.a 00:02:39.376 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.376 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.376 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.376 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.376 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.376 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:39.376 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:39.376 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:39.376 [101/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.376 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.376 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.376 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.376 [105/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.376 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.376 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:39.376 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.376 [109/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:39.376 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:39.376 [111/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.376 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.376 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.376 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.376 [115/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:39.376 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:39.376 [117/268] Linking static target lib/librte_net.a 00:02:39.376 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.376 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.377 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.377 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:39.377 [122/268] Linking static target lib/librte_cmdline.a 00:02:39.377 [123/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:39.377 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:39.377 [125/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:39.377 [126/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.377 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.377 [128/268] Linking static target lib/librte_timer.a 00:02:39.377 [129/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:39.377 [130/268] Linking static target lib/librte_rcu.a 00:02:39.377 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:39.377 [132/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.377 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.377 [134/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.377 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.377 [136/268] Linking static target lib/librte_mempool.a 00:02:39.377 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:39.377 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:39.377 [139/268] Linking static target lib/librte_eal.a 00:02:39.377 [140/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:39.377 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:39.377 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.377 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:39.377 [144/268] Linking static target lib/librte_dmadev.a 00:02:39.377 [145/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:39.377 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.635 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:39.635 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:39.635 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:39.635 [150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:39.635 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:39.635 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:39.635 
[153/268] Linking static target lib/librte_compressdev.a 00:02:39.635 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.635 [155/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.635 [156/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.635 [157/268] Linking static target lib/librte_mbuf.a 00:02:39.635 [158/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:39.635 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:39.635 [160/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.635 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:39.635 [162/268] Linking target lib/librte_log.so.24.1 00:02:39.635 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:39.635 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:39.635 [165/268] Linking static target lib/librte_hash.a 00:02:39.635 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:39.635 [167/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.635 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.635 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:39.635 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:39.635 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:39.635 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.635 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:39.635 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:39.635 [175/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:39.635 [176/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.635 [177/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:39.635 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:39.635 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:39.635 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.635 [181/268] Linking static target lib/librte_reorder.a 00:02:39.635 [182/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:39.893 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.893 [184/268] Linking target lib/librte_kvargs.so.24.1 00:02:39.893 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:39.893 [186/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.893 [187/268] Linking static target lib/librte_security.a 00:02:39.893 [188/268] Linking static target lib/librte_cryptodev.a 00:02:39.893 [189/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.893 [190/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.893 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:39.893 [192/268] Linking static target lib/librte_power.a 00:02:39.893 [193/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:39.893 
[194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.893 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.893 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.893 [197/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.893 [198/268] Linking target lib/librte_telemetry.so.24.1 00:02:39.893 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:39.893 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.893 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.893 [202/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:39.893 [203/268] Linking static target drivers/librte_bus_vdev.a 00:02:39.893 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.893 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.893 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.893 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:39.893 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:39.893 [209/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:40.150 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:40.150 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.150 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:40.150 [213/268] Linking static target drivers/librte_mempool_ring.a 00:02:40.150 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:40.150 [215/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.150 [216/268] Linking static target lib/librte_ethdev.a 00:02:40.150 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.407 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.407 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.407 [220/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.407 [221/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.407 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.407 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.665 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.665 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.665 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.921 [227/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.487 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.487 [229/268] Linking static target lib/librte_vhost.a 00:02:42.054 
[230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.430 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.012 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.547 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.547 [234/268] Linking target lib/librte_eal.so.24.1 00:02:52.547 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:52.547 [236/268] Linking target lib/librte_timer.so.24.1 00:02:52.547 [237/268] Linking target lib/librte_meter.so.24.1 00:02:52.547 [238/268] Linking target lib/librte_pci.so.24.1 00:02:52.547 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:52.547 [240/268] Linking target lib/librte_ring.so.24.1 00:02:52.547 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:52.547 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:52.547 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:52.547 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:52.547 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:52.547 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:52.547 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:52.547 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:52.547 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:52.547 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:52.805 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:52.805 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:52.805 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:52.805 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:52.805 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:52.805 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:52.805 [257/268] Linking target lib/librte_net.so.24.1 00:02:52.805 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:53.063 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:53.063 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:53.063 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:53.063 [262/268] Linking target lib/librte_security.so.24.1 00:02:53.063 [263/268] Linking target lib/librte_hash.so.24.1 00:02:53.063 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:53.322 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:53.322 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:53.322 [267/268] Linking target lib/librte_power.so.24.1 00:02:53.322 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:53.322 INFO: autodetecting backend as ninja 00:02:53.322 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:54.262 CC lib/ut_mock/mock.o 00:02:54.262 CC lib/ut/ut.o 00:02:54.521 CC lib/log/log_flags.o 00:02:54.521 CC lib/log/log.o 00:02:54.521 CC lib/log/log_deprecated.o 
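Note: the DPDK build that just completed was configured with the meson options summarized earlier in this log (the pared-down disable_libs list, enable_drivers=bus,bus/pci,bus/vdev,mempool/ring, max_lcores=128, tests=false, enable_docs=false, enable_kmods=false). A minimal standalone sketch of that configure/build step, assuming stock DPDK meson options; the exact invocation SPDK's build scripts use is not shown in this log:

    # Rough equivalent of the configure + build summarized above; option values
    # are copied from the meson summary, the command shape is an assumption.
    meson setup dpdk/build-tmp dpdk \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_libs=bbdev,fib,dispatcher,distributor,bpf,latencystats \
      -Denable_docs=false -Denable_kmods=false -Dtests=false \
      -Dmax_lcores=128
    # (disable_libs abridged here; the full list is in the summary above.)
    ninja -C dpdk/build-tmp -j 112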
00:02:54.521 LIB libspdk_ut_mock.a 00:02:54.521 LIB libspdk_ut.a 00:02:54.521 SO libspdk_ut_mock.so.6.0 00:02:54.521 LIB libspdk_log.a 00:02:54.521 SO libspdk_ut.so.2.0 00:02:54.521 SO libspdk_log.so.7.0 00:02:54.521 SYMLINK libspdk_ut_mock.so 00:02:54.521 SYMLINK libspdk_ut.so 00:02:54.779 SYMLINK libspdk_log.so 00:02:55.036 CC lib/dma/dma.o 00:02:55.036 CC lib/ioat/ioat.o 00:02:55.036 CC lib/util/base64.o 00:02:55.036 CC lib/util/bit_array.o 00:02:55.036 CXX lib/trace_parser/trace.o 00:02:55.036 CC lib/util/cpuset.o 00:02:55.036 CC lib/util/crc16.o 00:02:55.036 CC lib/util/crc32.o 00:02:55.036 CC lib/util/crc32_ieee.o 00:02:55.036 CC lib/util/crc32c.o 00:02:55.036 CC lib/util/crc64.o 00:02:55.036 CC lib/util/dif.o 00:02:55.036 CC lib/util/fd.o 00:02:55.036 CC lib/util/file.o 00:02:55.036 CC lib/util/hexlify.o 00:02:55.036 CC lib/util/iov.o 00:02:55.036 CC lib/util/math.o 00:02:55.036 CC lib/util/pipe.o 00:02:55.036 CC lib/util/strerror_tls.o 00:02:55.036 CC lib/util/string.o 00:02:55.036 CC lib/util/uuid.o 00:02:55.036 CC lib/util/xor.o 00:02:55.036 CC lib/util/fd_group.o 00:02:55.036 CC lib/util/zipf.o 00:02:55.322 CC lib/vfio_user/host/vfio_user_pci.o 00:02:55.322 CC lib/vfio_user/host/vfio_user.o 00:02:55.322 LIB libspdk_dma.a 00:02:55.322 SO libspdk_dma.so.4.0 00:02:55.322 LIB libspdk_ioat.a 00:02:55.322 SYMLINK libspdk_dma.so 00:02:55.322 SO libspdk_ioat.so.7.0 00:02:55.322 SYMLINK libspdk_ioat.so 00:02:55.322 LIB libspdk_vfio_user.a 00:02:55.633 SO libspdk_vfio_user.so.5.0 00:02:55.633 LIB libspdk_util.a 00:02:55.633 SYMLINK libspdk_vfio_user.so 00:02:55.633 SO libspdk_util.so.9.1 00:02:55.633 SYMLINK libspdk_util.so 00:02:55.633 LIB libspdk_trace_parser.a 00:02:55.889 SO libspdk_trace_parser.so.5.0 00:02:55.890 SYMLINK libspdk_trace_parser.so 00:02:56.147 CC lib/rdma_provider/common.o 00:02:56.147 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:56.147 CC lib/env_dpdk/env.o 00:02:56.147 CC lib/env_dpdk/init.o 00:02:56.147 CC lib/env_dpdk/memory.o 00:02:56.147 CC lib/env_dpdk/pci.o 00:02:56.147 CC lib/env_dpdk/threads.o 00:02:56.147 CC lib/env_dpdk/pci_ioat.o 00:02:56.147 CC lib/env_dpdk/pci_vmd.o 00:02:56.147 CC lib/env_dpdk/pci_virtio.o 00:02:56.147 CC lib/env_dpdk/pci_idxd.o 00:02:56.147 CC lib/env_dpdk/sigbus_handler.o 00:02:56.147 CC lib/env_dpdk/pci_event.o 00:02:56.147 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:56.147 CC lib/env_dpdk/pci_dpdk.o 00:02:56.147 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:56.147 CC lib/conf/conf.o 00:02:56.147 CC lib/json/json_parse.o 00:02:56.147 CC lib/rdma_utils/rdma_utils.o 00:02:56.147 CC lib/json/json_util.o 00:02:56.147 CC lib/json/json_write.o 00:02:56.147 CC lib/vmd/vmd.o 00:02:56.147 CC lib/vmd/led.o 00:02:56.147 CC lib/idxd/idxd.o 00:02:56.147 CC lib/idxd/idxd_user.o 00:02:56.147 CC lib/idxd/idxd_kernel.o 00:02:56.147 LIB libspdk_rdma_provider.a 00:02:56.147 SO libspdk_rdma_provider.so.6.0 00:02:56.404 LIB libspdk_conf.a 00:02:56.404 SO libspdk_conf.so.6.0 00:02:56.404 LIB libspdk_rdma_utils.a 00:02:56.404 SYMLINK libspdk_rdma_provider.so 00:02:56.404 LIB libspdk_json.a 00:02:56.404 SO libspdk_rdma_utils.so.1.0 00:02:56.404 SYMLINK libspdk_conf.so 00:02:56.404 SO libspdk_json.so.6.0 00:02:56.404 SYMLINK libspdk_rdma_utils.so 00:02:56.404 SYMLINK libspdk_json.so 00:02:56.404 LIB libspdk_idxd.a 00:02:56.661 SO libspdk_idxd.so.12.0 00:02:56.661 LIB libspdk_vmd.a 00:02:56.661 SO libspdk_vmd.so.6.0 00:02:56.661 SYMLINK libspdk_idxd.so 00:02:56.661 SYMLINK libspdk_vmd.so 00:02:56.919 CC lib/jsonrpc/jsonrpc_server.o 00:02:56.919 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:02:56.919 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:56.919 CC lib/jsonrpc/jsonrpc_client.o 00:02:57.177 LIB libspdk_env_dpdk.a 00:02:57.177 LIB libspdk_jsonrpc.a 00:02:57.177 SO libspdk_env_dpdk.so.14.1 00:02:57.177 SO libspdk_jsonrpc.so.6.0 00:02:57.177 SYMLINK libspdk_jsonrpc.so 00:02:57.177 SYMLINK libspdk_env_dpdk.so 00:02:57.435 CC lib/rpc/rpc.o 00:02:57.693 LIB libspdk_rpc.a 00:02:57.693 SO libspdk_rpc.so.6.0 00:02:57.950 SYMLINK libspdk_rpc.so 00:02:58.207 CC lib/keyring/keyring.o 00:02:58.207 CC lib/keyring/keyring_rpc.o 00:02:58.207 CC lib/trace/trace_flags.o 00:02:58.207 CC lib/trace/trace.o 00:02:58.207 CC lib/trace/trace_rpc.o 00:02:58.207 CC lib/notify/notify.o 00:02:58.207 CC lib/notify/notify_rpc.o 00:02:58.466 LIB libspdk_notify.a 00:02:58.466 LIB libspdk_keyring.a 00:02:58.466 SO libspdk_notify.so.6.0 00:02:58.466 SO libspdk_keyring.so.1.0 00:02:58.466 LIB libspdk_trace.a 00:02:58.466 SYMLINK libspdk_notify.so 00:02:58.466 SO libspdk_trace.so.10.0 00:02:58.466 SYMLINK libspdk_keyring.so 00:02:58.466 SYMLINK libspdk_trace.so 00:02:59.031 CC lib/sock/sock.o 00:02:59.031 CC lib/sock/sock_rpc.o 00:02:59.031 CC lib/thread/iobuf.o 00:02:59.031 CC lib/thread/thread.o 00:02:59.289 LIB libspdk_sock.a 00:02:59.289 SO libspdk_sock.so.10.0 00:02:59.289 SYMLINK libspdk_sock.so 00:02:59.854 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:59.854 CC lib/nvme/nvme_ns_cmd.o 00:02:59.854 CC lib/nvme/nvme_ctrlr.o 00:02:59.854 CC lib/nvme/nvme_fabric.o 00:02:59.854 CC lib/nvme/nvme_ns.o 00:02:59.854 CC lib/nvme/nvme_pcie_common.o 00:02:59.854 CC lib/nvme/nvme_pcie.o 00:02:59.854 CC lib/nvme/nvme_qpair.o 00:02:59.854 CC lib/nvme/nvme.o 00:02:59.854 CC lib/nvme/nvme_quirks.o 00:02:59.854 CC lib/nvme/nvme_transport.o 00:02:59.854 CC lib/nvme/nvme_discovery.o 00:02:59.854 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.854 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.854 CC lib/nvme/nvme_opal.o 00:02:59.854 CC lib/nvme/nvme_tcp.o 00:02:59.854 CC lib/nvme/nvme_poll_group.o 00:02:59.854 CC lib/nvme/nvme_io_msg.o 00:02:59.854 CC lib/nvme/nvme_stubs.o 00:02:59.854 CC lib/nvme/nvme_zns.o 00:02:59.854 CC lib/nvme/nvme_vfio_user.o 00:02:59.854 CC lib/nvme/nvme_auth.o 00:02:59.854 CC lib/nvme/nvme_cuse.o 00:02:59.854 CC lib/nvme/nvme_rdma.o 00:02:59.854 LIB libspdk_thread.a 00:03:00.111 SO libspdk_thread.so.10.1 00:03:00.111 SYMLINK libspdk_thread.so 00:03:00.369 CC lib/init/json_config.o 00:03:00.369 CC lib/init/subsystem.o 00:03:00.369 CC lib/init/subsystem_rpc.o 00:03:00.369 CC lib/init/rpc.o 00:03:00.369 CC lib/accel/accel.o 00:03:00.369 CC lib/virtio/virtio.o 00:03:00.369 CC lib/accel/accel_rpc.o 00:03:00.369 CC lib/virtio/virtio_vhost_user.o 00:03:00.369 CC lib/accel/accel_sw.o 00:03:00.369 CC lib/virtio/virtio_vfio_user.o 00:03:00.369 CC lib/virtio/virtio_pci.o 00:03:00.369 CC lib/vfu_tgt/tgt_rpc.o 00:03:00.369 CC lib/vfu_tgt/tgt_endpoint.o 00:03:00.369 CC lib/blob/blobstore.o 00:03:00.369 CC lib/blob/request.o 00:03:00.369 CC lib/blob/zeroes.o 00:03:00.369 CC lib/blob/blob_bs_dev.o 00:03:00.626 LIB libspdk_init.a 00:03:00.626 SO libspdk_init.so.5.0 00:03:00.626 LIB libspdk_virtio.a 00:03:00.626 LIB libspdk_vfu_tgt.a 00:03:00.626 SO libspdk_virtio.so.7.0 00:03:00.626 SYMLINK libspdk_init.so 00:03:00.884 SO libspdk_vfu_tgt.so.3.0 00:03:00.884 SYMLINK libspdk_virtio.so 00:03:00.884 SYMLINK libspdk_vfu_tgt.so 00:03:01.148 CC lib/event/app.o 00:03:01.148 CC lib/event/reactor.o 00:03:01.148 CC lib/event/log_rpc.o 00:03:01.148 CC lib/event/app_rpc.o 00:03:01.148 CC 
lib/event/scheduler_static.o 00:03:01.148 LIB libspdk_accel.a 00:03:01.148 SO libspdk_accel.so.15.1 00:03:01.148 SYMLINK libspdk_accel.so 00:03:01.406 LIB libspdk_nvme.a 00:03:01.406 LIB libspdk_event.a 00:03:01.406 SO libspdk_event.so.14.0 00:03:01.406 SO libspdk_nvme.so.13.1 00:03:01.406 SYMLINK libspdk_event.so 00:03:01.663 CC lib/bdev/bdev.o 00:03:01.663 CC lib/bdev/bdev_rpc.o 00:03:01.663 CC lib/bdev/bdev_zone.o 00:03:01.663 CC lib/bdev/part.o 00:03:01.663 CC lib/bdev/scsi_nvme.o 00:03:01.663 SYMLINK libspdk_nvme.so 00:03:02.594 LIB libspdk_blob.a 00:03:02.594 SO libspdk_blob.so.11.0 00:03:02.594 SYMLINK libspdk_blob.so 00:03:02.851 CC lib/blobfs/blobfs.o 00:03:02.851 CC lib/blobfs/tree.o 00:03:03.107 CC lib/lvol/lvol.o 00:03:03.363 LIB libspdk_bdev.a 00:03:03.363 SO libspdk_bdev.so.15.1 00:03:03.620 SYMLINK libspdk_bdev.so 00:03:03.620 LIB libspdk_blobfs.a 00:03:03.620 SO libspdk_blobfs.so.10.0 00:03:03.620 LIB libspdk_lvol.a 00:03:03.620 SYMLINK libspdk_blobfs.so 00:03:03.620 SO libspdk_lvol.so.10.0 00:03:03.878 SYMLINK libspdk_lvol.so 00:03:03.878 CC lib/nbd/nbd.o 00:03:03.878 CC lib/nbd/nbd_rpc.o 00:03:03.878 CC lib/scsi/dev.o 00:03:03.878 CC lib/scsi/port.o 00:03:03.878 CC lib/scsi/lun.o 00:03:03.878 CC lib/scsi/scsi.o 00:03:03.878 CC lib/scsi/scsi_bdev.o 00:03:03.878 CC lib/scsi/scsi_pr.o 00:03:03.878 CC lib/scsi/scsi_rpc.o 00:03:03.878 CC lib/scsi/task.o 00:03:03.878 CC lib/nvmf/ctrlr.o 00:03:03.878 CC lib/ublk/ublk.o 00:03:03.878 CC lib/nvmf/ctrlr_discovery.o 00:03:03.878 CC lib/nvmf/subsystem.o 00:03:03.878 CC lib/nvmf/nvmf.o 00:03:03.878 CC lib/ublk/ublk_rpc.o 00:03:03.878 CC lib/nvmf/ctrlr_bdev.o 00:03:03.878 CC lib/nvmf/transport.o 00:03:03.878 CC lib/nvmf/nvmf_rpc.o 00:03:03.878 CC lib/nvmf/tcp.o 00:03:03.878 CC lib/nvmf/stubs.o 00:03:03.878 CC lib/nvmf/mdns_server.o 00:03:03.878 CC lib/ftl/ftl_core.o 00:03:03.878 CC lib/nvmf/vfio_user.o 00:03:03.878 CC lib/ftl/ftl_init.o 00:03:03.878 CC lib/ftl/ftl_layout.o 00:03:03.878 CC lib/nvmf/rdma.o 00:03:03.878 CC lib/nvmf/auth.o 00:03:03.878 CC lib/ftl/ftl_debug.o 00:03:03.878 CC lib/ftl/ftl_io.o 00:03:03.878 CC lib/ftl/ftl_sb.o 00:03:03.878 CC lib/ftl/ftl_l2p.o 00:03:03.878 CC lib/ftl/ftl_l2p_flat.o 00:03:03.878 CC lib/ftl/ftl_band.o 00:03:03.878 CC lib/ftl/ftl_nv_cache.o 00:03:03.878 CC lib/ftl/ftl_band_ops.o 00:03:03.878 CC lib/ftl/ftl_writer.o 00:03:03.878 CC lib/ftl/ftl_rq.o 00:03:03.878 CC lib/ftl/ftl_reloc.o 00:03:03.878 CC lib/ftl/ftl_l2p_cache.o 00:03:03.878 CC lib/ftl/ftl_p2l.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.878 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.878 CC lib/ftl/utils/ftl_conf.o 00:03:03.878 CC lib/ftl/utils/ftl_md.o 00:03:03.878 CC lib/ftl/utils/ftl_mempool.o 00:03:03.878 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.878 CC lib/ftl/utils/ftl_property.o 00:03:03.878 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.878 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.878 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.878 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.878 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.878 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.878 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.878 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.878 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.878 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.878 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.879 CC lib/ftl/base/ftl_base_dev.o 00:03:03.879 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.879 CC lib/ftl/ftl_trace.o 00:03:04.443 LIB libspdk_nbd.a 00:03:04.443 SO libspdk_nbd.so.7.0 00:03:04.443 LIB libspdk_scsi.a 00:03:04.443 SYMLINK libspdk_nbd.so 00:03:04.443 SO libspdk_scsi.so.9.0 00:03:04.701 LIB libspdk_ublk.a 00:03:04.701 SYMLINK libspdk_scsi.so 00:03:04.701 SO libspdk_ublk.so.3.0 00:03:04.701 SYMLINK libspdk_ublk.so 00:03:04.959 LIB libspdk_ftl.a 00:03:04.959 CC lib/iscsi/conn.o 00:03:04.959 CC lib/iscsi/init_grp.o 00:03:04.959 CC lib/iscsi/md5.o 00:03:04.959 CC lib/iscsi/iscsi.o 00:03:04.959 CC lib/iscsi/tgt_node.o 00:03:04.959 CC lib/iscsi/param.o 00:03:04.959 CC lib/iscsi/portal_grp.o 00:03:04.959 CC lib/iscsi/iscsi_rpc.o 00:03:04.959 CC lib/iscsi/iscsi_subsystem.o 00:03:04.959 CC lib/iscsi/task.o 00:03:04.959 CC lib/vhost/vhost.o 00:03:04.959 CC lib/vhost/vhost_rpc.o 00:03:04.959 CC lib/vhost/vhost_scsi.o 00:03:04.959 CC lib/vhost/vhost_blk.o 00:03:04.959 CC lib/vhost/rte_vhost_user.o 00:03:05.218 SO libspdk_ftl.so.9.0 00:03:05.477 SYMLINK libspdk_ftl.so 00:03:05.477 LIB libspdk_nvmf.a 00:03:05.477 SO libspdk_nvmf.so.19.0 00:03:05.736 LIB libspdk_vhost.a 00:03:05.736 SYMLINK libspdk_nvmf.so 00:03:05.736 SO libspdk_vhost.so.8.0 00:03:05.996 SYMLINK libspdk_vhost.so 00:03:05.996 LIB libspdk_iscsi.a 00:03:05.996 SO libspdk_iscsi.so.8.0 00:03:06.255 SYMLINK libspdk_iscsi.so 00:03:06.515 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.774 CC module/vfu_device/vfu_virtio.o 00:03:06.774 CC module/vfu_device/vfu_virtio_scsi.o 00:03:06.774 CC module/vfu_device/vfu_virtio_blk.o 00:03:06.774 CC module/vfu_device/vfu_virtio_rpc.o 00:03:06.774 LIB libspdk_env_dpdk_rpc.a 00:03:06.774 CC module/keyring/file/keyring.o 00:03:06.774 CC module/keyring/file/keyring_rpc.o 00:03:06.774 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:06.774 CC module/sock/posix/posix.o 00:03:06.774 CC module/blob/bdev/blob_bdev.o 00:03:06.774 CC module/accel/error/accel_error.o 00:03:06.774 CC module/keyring/linux/keyring_rpc.o 00:03:06.774 CC module/keyring/linux/keyring.o 00:03:06.774 CC module/accel/error/accel_error_rpc.o 00:03:06.774 CC module/scheduler/gscheduler/gscheduler.o 00:03:06.774 CC module/accel/ioat/accel_ioat.o 00:03:06.774 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.774 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.774 CC module/accel/dsa/accel_dsa_rpc.o 00:03:06.774 CC module/accel/dsa/accel_dsa.o 00:03:06.774 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.774 CC module/accel/iaa/accel_iaa.o 00:03:06.774 CC module/accel/iaa/accel_iaa_rpc.o 00:03:06.774 SYMLINK libspdk_env_dpdk_rpc.so 00:03:07.033 LIB libspdk_keyring_file.a 00:03:07.033 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.033 LIB libspdk_keyring_linux.a 00:03:07.033 LIB libspdk_scheduler_gscheduler.a 00:03:07.033 SO libspdk_keyring_file.so.1.0 00:03:07.033 SO libspdk_keyring_linux.so.1.0 00:03:07.033 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.033 SO libspdk_scheduler_gscheduler.so.4.0 00:03:07.033 LIB libspdk_accel_error.a 00:03:07.033 LIB libspdk_accel_ioat.a 00:03:07.033 LIB libspdk_scheduler_dynamic.a 00:03:07.033 SYMLINK libspdk_keyring_file.so 00:03:07.033 LIB libspdk_accel_iaa.a 00:03:07.033 SO 
libspdk_accel_ioat.so.6.0 00:03:07.033 SO libspdk_accel_error.so.2.0 00:03:07.033 LIB libspdk_accel_dsa.a 00:03:07.033 SO libspdk_scheduler_dynamic.so.4.0 00:03:07.033 LIB libspdk_blob_bdev.a 00:03:07.033 SYMLINK libspdk_keyring_linux.so 00:03:07.033 SYMLINK libspdk_scheduler_gscheduler.so 00:03:07.033 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.033 SO libspdk_accel_iaa.so.3.0 00:03:07.033 SO libspdk_accel_dsa.so.5.0 00:03:07.033 SO libspdk_blob_bdev.so.11.0 00:03:07.033 SYMLINK libspdk_accel_ioat.so 00:03:07.033 SYMLINK libspdk_accel_error.so 00:03:07.033 SYMLINK libspdk_scheduler_dynamic.so 00:03:07.293 LIB libspdk_vfu_device.a 00:03:07.293 SYMLINK libspdk_blob_bdev.so 00:03:07.293 SYMLINK libspdk_accel_iaa.so 00:03:07.293 SYMLINK libspdk_accel_dsa.so 00:03:07.293 SO libspdk_vfu_device.so.3.0 00:03:07.293 SYMLINK libspdk_vfu_device.so 00:03:07.293 LIB libspdk_sock_posix.a 00:03:07.552 SO libspdk_sock_posix.so.6.0 00:03:07.552 SYMLINK libspdk_sock_posix.so 00:03:07.552 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.552 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.552 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:07.552 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.552 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.552 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.552 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.552 CC module/bdev/null/bdev_null.o 00:03:07.552 CC module/bdev/malloc/bdev_malloc.o 00:03:07.552 CC module/bdev/null/bdev_null_rpc.o 00:03:07.552 CC module/bdev/error/vbdev_error.o 00:03:07.552 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.552 CC module/bdev/delay/vbdev_delay.o 00:03:07.552 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.552 CC module/bdev/split/vbdev_split.o 00:03:07.552 CC module/bdev/gpt/gpt.o 00:03:07.552 CC module/bdev/split/vbdev_split_rpc.o 00:03:07.810 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.810 CC module/bdev/iscsi/bdev_iscsi.o 00:03:07.810 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.810 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:07.810 CC module/bdev/nvme/bdev_nvme.o 00:03:07.810 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.810 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.810 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:07.810 CC module/bdev/raid/bdev_raid.o 00:03:07.810 CC module/bdev/raid/bdev_raid_sb.o 00:03:07.810 CC module/bdev/nvme/nvme_rpc.o 00:03:07.810 CC module/bdev/raid/bdev_raid_rpc.o 00:03:07.810 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.810 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.810 CC module/bdev/nvme/vbdev_opal.o 00:03:07.810 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.810 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.810 CC module/bdev/raid/raid0.o 00:03:07.810 CC module/bdev/raid/raid1.o 00:03:07.810 CC module/bdev/raid/concat.o 00:03:07.810 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.810 CC module/bdev/aio/bdev_aio.o 00:03:07.810 CC module/bdev/aio/bdev_aio_rpc.o 00:03:07.810 CC module/bdev/ftl/bdev_ftl.o 00:03:07.810 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.810 LIB libspdk_blobfs_bdev.a 00:03:08.068 LIB libspdk_bdev_split.a 00:03:08.068 SO libspdk_blobfs_bdev.so.6.0 00:03:08.069 LIB libspdk_bdev_null.a 00:03:08.069 LIB libspdk_bdev_gpt.a 00:03:08.069 LIB libspdk_bdev_error.a 00:03:08.069 SO libspdk_bdev_split.so.6.0 00:03:08.069 SO libspdk_bdev_null.so.6.0 00:03:08.069 LIB libspdk_bdev_passthru.a 00:03:08.069 SO libspdk_bdev_error.so.6.0 00:03:08.069 SYMLINK libspdk_blobfs_bdev.so 00:03:08.069 SO libspdk_bdev_gpt.so.6.0 00:03:08.069 LIB libspdk_bdev_ftl.a 
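Note: the libspdk_bdev_* libraries being linked through here (malloc, passthru, split, gpt, error, zone_block, raid, and the rest) are the stackable bdev layers SPDK exposes at runtime over JSON-RPC. A minimal usage sketch, assuming a running SPDK target and the in-tree rpc.py client; the bdev names are illustrative only:

    # Create a 64 MiB malloc bdev with 512-byte blocks, stack a passthru
    # vbdev on top of it, then list the registered bdevs to verify.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p PT0
    ./scripts/rpc.py bdev_get_bdevs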
00:03:08.069 LIB libspdk_bdev_zone_block.a 00:03:08.069 SO libspdk_bdev_passthru.so.6.0 00:03:08.069 LIB libspdk_bdev_aio.a 00:03:08.069 LIB libspdk_bdev_malloc.a 00:03:08.069 SO libspdk_bdev_ftl.so.6.0 00:03:08.069 SYMLINK libspdk_bdev_split.so 00:03:08.069 LIB libspdk_bdev_delay.a 00:03:08.069 LIB libspdk_bdev_iscsi.a 00:03:08.069 SYMLINK libspdk_bdev_null.so 00:03:08.069 SYMLINK libspdk_bdev_gpt.so 00:03:08.069 SO libspdk_bdev_zone_block.so.6.0 00:03:08.069 SYMLINK libspdk_bdev_error.so 00:03:08.069 SO libspdk_bdev_malloc.so.6.0 00:03:08.069 SO libspdk_bdev_aio.so.6.0 00:03:08.069 SO libspdk_bdev_iscsi.so.6.0 00:03:08.069 SO libspdk_bdev_delay.so.6.0 00:03:08.069 SYMLINK libspdk_bdev_passthru.so 00:03:08.069 SYMLINK libspdk_bdev_ftl.so 00:03:08.069 LIB libspdk_bdev_lvol.a 00:03:08.069 SYMLINK libspdk_bdev_zone_block.so 00:03:08.069 SYMLINK libspdk_bdev_malloc.so 00:03:08.069 SYMLINK libspdk_bdev_aio.so 00:03:08.069 SYMLINK libspdk_bdev_iscsi.so 00:03:08.069 SO libspdk_bdev_lvol.so.6.0 00:03:08.328 SYMLINK libspdk_bdev_delay.so 00:03:08.328 LIB libspdk_bdev_virtio.a 00:03:08.328 SO libspdk_bdev_virtio.so.6.0 00:03:08.328 SYMLINK libspdk_bdev_lvol.so 00:03:08.328 SYMLINK libspdk_bdev_virtio.so 00:03:08.587 LIB libspdk_bdev_raid.a 00:03:08.587 SO libspdk_bdev_raid.so.6.0 00:03:08.587 SYMLINK libspdk_bdev_raid.so 00:03:09.533 LIB libspdk_bdev_nvme.a 00:03:09.533 SO libspdk_bdev_nvme.so.7.0 00:03:09.533 SYMLINK libspdk_bdev_nvme.so 00:03:10.101 CC module/event/subsystems/vmd/vmd.o 00:03:10.101 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.101 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.101 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:10.101 CC module/event/subsystems/keyring/keyring.o 00:03:10.101 CC module/event/subsystems/sock/sock.o 00:03:10.101 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.101 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.101 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.359 LIB libspdk_event_keyring.a 00:03:10.359 LIB libspdk_event_vfu_tgt.a 00:03:10.359 LIB libspdk_event_scheduler.a 00:03:10.359 LIB libspdk_event_vmd.a 00:03:10.359 LIB libspdk_event_sock.a 00:03:10.359 SO libspdk_event_keyring.so.1.0 00:03:10.359 LIB libspdk_event_vhost_blk.a 00:03:10.359 LIB libspdk_event_iobuf.a 00:03:10.359 SO libspdk_event_sock.so.5.0 00:03:10.359 SO libspdk_event_vfu_tgt.so.3.0 00:03:10.359 SO libspdk_event_scheduler.so.4.0 00:03:10.359 SO libspdk_event_vmd.so.6.0 00:03:10.359 SO libspdk_event_vhost_blk.so.3.0 00:03:10.359 SO libspdk_event_iobuf.so.3.0 00:03:10.360 SYMLINK libspdk_event_keyring.so 00:03:10.360 SYMLINK libspdk_event_vfu_tgt.so 00:03:10.360 SYMLINK libspdk_event_sock.so 00:03:10.360 SYMLINK libspdk_event_scheduler.so 00:03:10.360 SYMLINK libspdk_event_vmd.so 00:03:10.360 SYMLINK libspdk_event_vhost_blk.so 00:03:10.360 SYMLINK libspdk_event_iobuf.so 00:03:10.929 CC module/event/subsystems/accel/accel.o 00:03:10.929 LIB libspdk_event_accel.a 00:03:10.929 SO libspdk_event_accel.so.6.0 00:03:11.187 SYMLINK libspdk_event_accel.so 00:03:11.446 CC module/event/subsystems/bdev/bdev.o 00:03:11.704 LIB libspdk_event_bdev.a 00:03:11.704 SO libspdk_event_bdev.so.6.0 00:03:11.704 SYMLINK libspdk_event_bdev.so 00:03:11.963 CC module/event/subsystems/nbd/nbd.o 00:03:11.963 CC module/event/subsystems/scsi/scsi.o 00:03:11.963 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.963 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.963 CC module/event/subsystems/ublk/ublk.o 00:03:12.222 LIB libspdk_event_nbd.a 00:03:12.222 LIB 
libspdk_event_scsi.a 00:03:12.222 SO libspdk_event_nbd.so.6.0 00:03:12.222 LIB libspdk_event_ublk.a 00:03:12.222 SO libspdk_event_scsi.so.6.0 00:03:12.222 SO libspdk_event_ublk.so.3.0 00:03:12.222 LIB libspdk_event_nvmf.a 00:03:12.222 SYMLINK libspdk_event_nbd.so 00:03:12.222 SYMLINK libspdk_event_scsi.so 00:03:12.222 SYMLINK libspdk_event_ublk.so 00:03:12.222 SO libspdk_event_nvmf.so.6.0 00:03:12.482 SYMLINK libspdk_event_nvmf.so 00:03:12.740 CC module/event/subsystems/iscsi/iscsi.o 00:03:12.740 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:12.740 LIB libspdk_event_iscsi.a 00:03:12.740 LIB libspdk_event_vhost_scsi.a 00:03:12.999 SO libspdk_event_iscsi.so.6.0 00:03:12.999 SO libspdk_event_vhost_scsi.so.3.0 00:03:12.999 SYMLINK libspdk_event_iscsi.so 00:03:12.999 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.258 SO libspdk.so.6.0 00:03:13.258 SYMLINK libspdk.so 00:03:13.516 CXX app/trace/trace.o 00:03:13.516 CC app/spdk_nvme_perf/perf.o 00:03:13.516 CC app/trace_record/trace_record.o 00:03:13.516 CC app/spdk_nvme_discover/discovery_aer.o 00:03:13.517 CC app/spdk_top/spdk_top.o 00:03:13.517 CC app/spdk_lspci/spdk_lspci.o 00:03:13.517 CC app/spdk_nvme_identify/identify.o 00:03:13.517 CC test/rpc_client/rpc_client_test.o 00:03:13.517 CC app/iscsi_tgt/iscsi_tgt.o 00:03:13.517 TEST_HEADER include/spdk/accel.h 00:03:13.517 TEST_HEADER include/spdk/accel_module.h 00:03:13.517 TEST_HEADER include/spdk/barrier.h 00:03:13.517 TEST_HEADER include/spdk/assert.h 00:03:13.517 TEST_HEADER include/spdk/base64.h 00:03:13.517 CC app/spdk_dd/spdk_dd.o 00:03:13.517 CC app/nvmf_tgt/nvmf_main.o 00:03:13.517 TEST_HEADER include/spdk/bdev.h 00:03:13.517 TEST_HEADER include/spdk/bdev_zone.h 00:03:13.517 TEST_HEADER include/spdk/bdev_module.h 00:03:13.517 TEST_HEADER include/spdk/bit_pool.h 00:03:13.517 TEST_HEADER include/spdk/bit_array.h 00:03:13.517 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.517 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.517 TEST_HEADER include/spdk/blob.h 00:03:13.517 TEST_HEADER include/spdk/blobfs.h 00:03:13.517 TEST_HEADER include/spdk/conf.h 00:03:13.517 TEST_HEADER include/spdk/config.h 00:03:13.517 TEST_HEADER include/spdk/crc16.h 00:03:13.517 TEST_HEADER include/spdk/cpuset.h 00:03:13.517 TEST_HEADER include/spdk/crc32.h 00:03:13.517 TEST_HEADER include/spdk/dif.h 00:03:13.517 TEST_HEADER include/spdk/crc64.h 00:03:13.517 TEST_HEADER include/spdk/endian.h 00:03:13.517 TEST_HEADER include/spdk/dma.h 00:03:13.517 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.517 TEST_HEADER include/spdk/event.h 00:03:13.517 TEST_HEADER include/spdk/env.h 00:03:13.517 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:13.517 TEST_HEADER include/spdk/fd.h 00:03:13.517 CC app/spdk_tgt/spdk_tgt.o 00:03:13.517 TEST_HEADER include/spdk/fd_group.h 00:03:13.517 TEST_HEADER include/spdk/file.h 00:03:13.517 TEST_HEADER include/spdk/ftl.h 00:03:13.517 TEST_HEADER include/spdk/hexlify.h 00:03:13.517 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.517 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.517 TEST_HEADER include/spdk/idxd.h 00:03:13.517 TEST_HEADER include/spdk/histogram_data.h 00:03:13.517 TEST_HEADER include/spdk/init.h 00:03:13.517 TEST_HEADER include/spdk/ioat.h 00:03:13.517 TEST_HEADER include/spdk/iscsi_spec.h 00:03:13.517 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.517 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.517 TEST_HEADER include/spdk/json.h 00:03:13.517 TEST_HEADER include/spdk/keyring.h 00:03:13.517 TEST_HEADER include/spdk/keyring_module.h 00:03:13.517 TEST_HEADER 
include/spdk/likely.h 00:03:13.517 TEST_HEADER include/spdk/log.h 00:03:13.517 TEST_HEADER include/spdk/mmio.h 00:03:13.517 TEST_HEADER include/spdk/nbd.h 00:03:13.517 TEST_HEADER include/spdk/lvol.h 00:03:13.517 TEST_HEADER include/spdk/memory.h 00:03:13.517 TEST_HEADER include/spdk/notify.h 00:03:13.517 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.517 TEST_HEADER include/spdk/nvme.h 00:03:13.517 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.517 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.517 TEST_HEADER include/spdk/nvme_spec.h 00:03:13.517 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.517 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.517 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.517 TEST_HEADER include/spdk/nvmf.h 00:03:13.517 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.517 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.517 TEST_HEADER include/spdk/opal.h 00:03:13.517 TEST_HEADER include/spdk/opal_spec.h 00:03:13.517 TEST_HEADER include/spdk/queue.h 00:03:13.517 TEST_HEADER include/spdk/pci_ids.h 00:03:13.517 TEST_HEADER include/spdk/reduce.h 00:03:13.517 TEST_HEADER include/spdk/pipe.h 00:03:13.517 TEST_HEADER include/spdk/scheduler.h 00:03:13.517 TEST_HEADER include/spdk/scsi.h 00:03:13.517 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.517 TEST_HEADER include/spdk/rpc.h 00:03:13.517 TEST_HEADER include/spdk/sock.h 00:03:13.517 TEST_HEADER include/spdk/stdinc.h 00:03:13.517 TEST_HEADER include/spdk/string.h 00:03:13.517 TEST_HEADER include/spdk/thread.h 00:03:13.517 TEST_HEADER include/spdk/trace.h 00:03:13.517 TEST_HEADER include/spdk/tree.h 00:03:13.517 TEST_HEADER include/spdk/trace_parser.h 00:03:13.517 TEST_HEADER include/spdk/ublk.h 00:03:13.517 TEST_HEADER include/spdk/util.h 00:03:13.517 TEST_HEADER include/spdk/version.h 00:03:13.517 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:13.517 TEST_HEADER include/spdk/uuid.h 00:03:13.517 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.517 TEST_HEADER include/spdk/vhost.h 00:03:13.517 TEST_HEADER include/spdk/vmd.h 00:03:13.517 TEST_HEADER include/spdk/xor.h 00:03:13.517 TEST_HEADER include/spdk/zipf.h 00:03:13.517 CXX test/cpp_headers/accel.o 00:03:13.517 CXX test/cpp_headers/barrier.o 00:03:13.517 CXX test/cpp_headers/assert.o 00:03:13.517 CXX test/cpp_headers/accel_module.o 00:03:13.517 CXX test/cpp_headers/bdev.o 00:03:13.517 CXX test/cpp_headers/bdev_module.o 00:03:13.517 CXX test/cpp_headers/base64.o 00:03:13.517 CXX test/cpp_headers/bit_pool.o 00:03:13.517 CXX test/cpp_headers/bit_array.o 00:03:13.517 CXX test/cpp_headers/bdev_zone.o 00:03:13.517 CXX test/cpp_headers/blobfs_bdev.o 00:03:13.517 CXX test/cpp_headers/blob_bdev.o 00:03:13.517 CXX test/cpp_headers/blob.o 00:03:13.517 CXX test/cpp_headers/blobfs.o 00:03:13.517 CXX test/cpp_headers/config.o 00:03:13.517 CXX test/cpp_headers/cpuset.o 00:03:13.517 CXX test/cpp_headers/conf.o 00:03:13.790 CXX test/cpp_headers/crc16.o 00:03:13.790 CXX test/cpp_headers/crc32.o 00:03:13.790 CXX test/cpp_headers/crc64.o 00:03:13.790 CXX test/cpp_headers/dma.o 00:03:13.790 CXX test/cpp_headers/dif.o 00:03:13.790 CXX test/cpp_headers/endian.o 00:03:13.790 CXX test/cpp_headers/env.o 00:03:13.790 CXX test/cpp_headers/env_dpdk.o 00:03:13.790 CXX test/cpp_headers/event.o 00:03:13.790 CXX test/cpp_headers/fd_group.o 00:03:13.790 CXX test/cpp_headers/ftl.o 00:03:13.790 CXX test/cpp_headers/fd.o 00:03:13.790 CXX test/cpp_headers/file.o 00:03:13.790 CXX test/cpp_headers/gpt_spec.o 00:03:13.790 CXX test/cpp_headers/hexlify.o 00:03:13.790 CXX test/cpp_headers/histogram_data.o 
00:03:13.790 CXX test/cpp_headers/idxd.o 00:03:13.790 CXX test/cpp_headers/idxd_spec.o 00:03:13.790 CXX test/cpp_headers/ioat_spec.o 00:03:13.790 CXX test/cpp_headers/ioat.o 00:03:13.790 CXX test/cpp_headers/init.o 00:03:13.790 CXX test/cpp_headers/iscsi_spec.o 00:03:13.790 CXX test/cpp_headers/json.o 00:03:13.790 CXX test/cpp_headers/keyring_module.o 00:03:13.790 CXX test/cpp_headers/keyring.o 00:03:13.790 CXX test/cpp_headers/jsonrpc.o 00:03:13.790 CXX test/cpp_headers/likely.o 00:03:13.790 CXX test/cpp_headers/memory.o 00:03:13.790 CXX test/cpp_headers/log.o 00:03:13.790 CXX test/cpp_headers/lvol.o 00:03:13.790 CXX test/cpp_headers/mmio.o 00:03:13.790 CXX test/cpp_headers/nbd.o 00:03:13.790 CXX test/cpp_headers/notify.o 00:03:13.790 CXX test/cpp_headers/nvme_intel.o 00:03:13.790 CXX test/cpp_headers/nvme.o 00:03:13.790 CXX test/cpp_headers/nvme_ocssd.o 00:03:13.790 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:13.790 CXX test/cpp_headers/nvme_spec.o 00:03:13.790 CXX test/cpp_headers/nvme_zns.o 00:03:13.790 CXX test/cpp_headers/nvmf_cmd.o 00:03:13.790 CC test/env/vtophys/vtophys.o 00:03:13.790 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:13.790 CXX test/cpp_headers/nvmf_spec.o 00:03:13.790 CXX test/cpp_headers/nvmf.o 00:03:13.790 CC test/app/histogram_perf/histogram_perf.o 00:03:13.790 CXX test/cpp_headers/opal.o 00:03:13.790 CXX test/cpp_headers/nvmf_transport.o 00:03:13.790 CXX test/cpp_headers/pci_ids.o 00:03:13.790 CXX test/cpp_headers/opal_spec.o 00:03:13.790 CXX test/cpp_headers/pipe.o 00:03:13.790 CXX test/cpp_headers/queue.o 00:03:13.790 CXX test/cpp_headers/rpc.o 00:03:13.790 CC app/fio/nvme/fio_plugin.o 00:03:13.790 CXX test/cpp_headers/scheduler.o 00:03:13.790 CXX test/cpp_headers/reduce.o 00:03:13.790 CXX test/cpp_headers/scsi.o 00:03:13.790 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:13.790 CC examples/util/zipf/zipf.o 00:03:13.790 CXX test/cpp_headers/scsi_spec.o 00:03:13.790 CXX test/cpp_headers/sock.o 00:03:13.790 CXX test/cpp_headers/string.o 00:03:13.790 CXX test/cpp_headers/stdinc.o 00:03:13.790 CXX test/cpp_headers/thread.o 00:03:13.790 CC test/app/jsoncat/jsoncat.o 00:03:13.790 CXX test/cpp_headers/trace.o 00:03:13.790 CXX test/cpp_headers/trace_parser.o 00:03:13.790 CXX test/cpp_headers/tree.o 00:03:13.790 CC examples/ioat/perf/perf.o 00:03:13.790 CXX test/cpp_headers/ublk.o 00:03:13.790 CXX test/cpp_headers/util.o 00:03:13.790 CXX test/cpp_headers/uuid.o 00:03:13.790 CXX test/cpp_headers/version.o 00:03:13.790 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.790 CC test/thread/poller_perf/poller_perf.o 00:03:13.790 CC test/env/pci/pci_ut.o 00:03:13.790 CC test/app/stub/stub.o 00:03:13.790 CC test/env/memory/memory_ut.o 00:03:13.790 CC app/fio/bdev/fio_plugin.o 00:03:13.790 CC examples/ioat/verify/verify.o 00:03:13.790 LINK spdk_lspci 00:03:13.790 CC test/app/bdev_svc/bdev_svc.o 00:03:13.790 CXX test/cpp_headers/vfio_user_spec.o 00:03:14.069 CXX test/cpp_headers/vhost.o 00:03:14.069 CC test/dma/test_dma/test_dma.o 00:03:14.069 LINK rpc_client_test 00:03:14.069 LINK spdk_nvme_discover 00:03:14.069 LINK iscsi_tgt 00:03:14.069 LINK nvmf_tgt 00:03:14.069 LINK spdk_trace_record 00:03:14.336 LINK interrupt_tgt 00:03:14.336 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.336 LINK spdk_tgt 00:03:14.336 LINK vtophys 00:03:14.336 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.336 LINK poller_perf 00:03:14.336 LINK zipf 00:03:14.336 CXX test/cpp_headers/vmd.o 00:03:14.336 CXX test/cpp_headers/xor.o 00:03:14.336 LINK env_dpdk_post_init 00:03:14.336 CXX 
test/cpp_headers/zipf.o 00:03:14.336 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.594 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.594 LINK stub 00:03:14.594 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.594 LINK jsoncat 00:03:14.594 LINK histogram_perf 00:03:14.594 LINK bdev_svc 00:03:14.594 LINK spdk_dd 00:03:14.594 LINK spdk_trace 00:03:14.594 LINK ioat_perf 00:03:14.594 LINK verify 00:03:14.852 LINK pci_ut 00:03:14.852 LINK test_dma 00:03:14.852 LINK vhost_fuzz 00:03:14.852 LINK spdk_bdev 00:03:14.852 LINK nvme_fuzz 00:03:14.852 LINK spdk_nvme_perf 00:03:14.852 LINK spdk_nvme 00:03:14.852 LINK spdk_nvme_identify 00:03:14.852 LINK mem_callbacks 00:03:14.852 CC app/vhost/vhost.o 00:03:14.852 CC examples/sock/hello_world/hello_sock.o 00:03:15.111 LINK spdk_top 00:03:15.111 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.111 CC examples/vmd/led/led.o 00:03:15.111 CC examples/idxd/perf/perf.o 00:03:15.111 CC test/event/reactor_perf/reactor_perf.o 00:03:15.111 CC test/event/event_perf/event_perf.o 00:03:15.111 CC test/event/reactor/reactor.o 00:03:15.111 CC examples/thread/thread/thread_ex.o 00:03:15.111 CC test/event/scheduler/scheduler.o 00:03:15.111 CC test/event/app_repeat/app_repeat.o 00:03:15.111 LINK lsvmd 00:03:15.111 LINK led 00:03:15.111 LINK vhost 00:03:15.111 LINK reactor_perf 00:03:15.111 LINK reactor 00:03:15.111 LINK event_perf 00:03:15.111 LINK hello_sock 00:03:15.369 LINK app_repeat 00:03:15.369 LINK memory_ut 00:03:15.369 CC test/nvme/sgl/sgl.o 00:03:15.369 CC test/nvme/fdp/fdp.o 00:03:15.369 LINK thread 00:03:15.369 CC test/nvme/reset/reset.o 00:03:15.369 CC test/nvme/connect_stress/connect_stress.o 00:03:15.369 CC test/nvme/startup/startup.o 00:03:15.369 CC test/nvme/e2edp/nvme_dp.o 00:03:15.369 CC test/nvme/overhead/overhead.o 00:03:15.369 CC test/nvme/compliance/nvme_compliance.o 00:03:15.369 CC test/accel/dif/dif.o 00:03:15.369 CC test/blobfs/mkfs/mkfs.o 00:03:15.369 CC test/nvme/reserve/reserve.o 00:03:15.369 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.369 LINK scheduler 00:03:15.369 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.369 LINK idxd_perf 00:03:15.369 CC test/nvme/aer/aer.o 00:03:15.369 CC test/nvme/cuse/cuse.o 00:03:15.369 CC test/nvme/simple_copy/simple_copy.o 00:03:15.369 CC test/nvme/err_injection/err_injection.o 00:03:15.369 CC test/nvme/boot_partition/boot_partition.o 00:03:15.369 CC test/lvol/esnap/esnap.o 00:03:15.369 LINK startup 00:03:15.369 LINK connect_stress 00:03:15.369 LINK doorbell_aers 00:03:15.627 LINK reserve 00:03:15.627 LINK fused_ordering 00:03:15.627 LINK boot_partition 00:03:15.627 LINK mkfs 00:03:15.627 LINK sgl 00:03:15.627 LINK reset 00:03:15.627 LINK err_injection 00:03:15.627 LINK nvme_dp 00:03:15.627 LINK simple_copy 00:03:15.627 LINK aer 00:03:15.627 LINK overhead 00:03:15.627 LINK nvme_compliance 00:03:15.627 LINK fdp 00:03:15.627 LINK dif 00:03:15.627 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:15.627 CC examples/nvme/reconnect/reconnect.o 00:03:15.627 CC examples/nvme/arbitration/arbitration.o 00:03:15.627 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.627 CC examples/nvme/abort/abort.o 00:03:15.627 CC examples/nvme/hello_world/hello_world.o 00:03:15.627 CC examples/nvme/hotplug/hotplug.o 00:03:15.627 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:15.886 LINK iscsi_fuzz 00:03:15.886 CC examples/accel/perf/accel_perf.o 00:03:15.886 CC examples/blob/cli/blobcli.o 00:03:15.886 CC examples/blob/hello_world/hello_blob.o 00:03:15.886 LINK cmb_copy 00:03:15.886 LINK pmr_persistence 00:03:15.886 
LINK hello_world 00:03:15.886 LINK hotplug 00:03:15.886 LINK arbitration 00:03:15.886 LINK reconnect 00:03:16.144 LINK abort 00:03:16.144 LINK nvme_manage 00:03:16.144 LINK hello_blob 00:03:16.144 LINK accel_perf 00:03:16.144 LINK blobcli 00:03:16.144 CC test/bdev/bdevio/bdevio.o 00:03:16.402 LINK cuse 00:03:16.661 LINK bdevio 00:03:16.661 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.661 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.919 LINK hello_bdev 00:03:17.177 LINK bdevperf 00:03:17.745 CC examples/nvmf/nvmf/nvmf.o 00:03:18.312 LINK nvmf 00:03:18.878 LINK esnap 00:03:19.136 00:03:19.136 real 0m49.626s 00:03:19.136 user 6m27.994s 00:03:19.136 sys 4m18.613s 00:03:19.136 15:08:22 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:19.136 15:08:22 make -- common/autotest_common.sh@10 -- $ set +x 00:03:19.136 ************************************ 00:03:19.136 END TEST make 00:03:19.136 ************************************ 00:03:19.136 15:08:22 -- common/autotest_common.sh@1142 -- $ return 0 00:03:19.136 15:08:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:19.136 15:08:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:19.136 15:08:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:19.136 15:08:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.136 15:08:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:19.136 15:08:22 -- pm/common@44 -- $ pid=2753766 00:03:19.136 15:08:22 -- pm/common@50 -- $ kill -TERM 2753766 00:03:19.136 15:08:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.136 15:08:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:19.136 15:08:22 -- pm/common@44 -- $ pid=2753768 00:03:19.136 15:08:22 -- pm/common@50 -- $ kill -TERM 2753768 00:03:19.136 15:08:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.136 15:08:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:19.136 15:08:22 -- pm/common@44 -- $ pid=2753770 00:03:19.136 15:08:22 -- pm/common@50 -- $ kill -TERM 2753770 00:03:19.136 15:08:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.136 15:08:22 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:19.136 15:08:22 -- pm/common@44 -- $ pid=2753786 00:03:19.136 15:08:22 -- pm/common@50 -- $ sudo -E kill -TERM 2753786 00:03:19.394 15:08:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:19.394 15:08:23 -- nvmf/common.sh@7 -- # uname -s 00:03:19.394 15:08:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.394 15:08:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.394 15:08:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.394 15:08:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.394 15:08:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.394 15:08:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.394 15:08:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.394 15:08:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.394 15:08:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.394 15:08:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.394 15:08:23 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:03:19.395 15:08:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:03:19.395 15:08:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.395 15:08:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.395 15:08:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:19.395 15:08:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:19.395 15:08:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:19.395 15:08:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.395 15:08:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.395 15:08:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.395 15:08:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.395 15:08:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.395 15:08:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.395 15:08:23 -- paths/export.sh@5 -- # export PATH 00:03:19.395 15:08:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.395 15:08:23 -- nvmf/common.sh@47 -- # : 0 00:03:19.395 15:08:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:19.395 15:08:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:19.395 15:08:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:19.395 15:08:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.395 15:08:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.395 15:08:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:19.395 15:08:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:19.395 15:08:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:19.395 15:08:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.395 15:08:23 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.395 15:08:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.395 15:08:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.395 15:08:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:19.395 15:08:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.395 15:08:23 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:19.395 15:08:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.395 15:08:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.395 15:08:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.395 15:08:23 -- spdk/autotest.sh@48 -- # udevadm_pid=2814575 00:03:19.395 15:08:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:19.395 15:08:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:19.395 15:08:23 -- pm/common@17 -- # local monitor 00:03:19.395 15:08:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.395 15:08:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.395 15:08:23 -- pm/common@21 -- # date +%s 00:03:19.395 15:08:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.395 15:08:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.395 15:08:23 -- pm/common@21 -- # date +%s 00:03:19.395 15:08:23 -- pm/common@21 -- # date +%s 00:03:19.395 15:08:23 -- pm/common@25 -- # sleep 1 00:03:19.395 15:08:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048903 00:03:19.395 15:08:23 -- pm/common@21 -- # date +%s 00:03:19.395 15:08:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048903 00:03:19.395 15:08:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048903 00:03:19.395 15:08:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048903 00:03:19.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048903_collect-cpu-temp.pm.log 00:03:19.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048903_collect-vmstat.pm.log 00:03:19.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048903_collect-cpu-load.pm.log 00:03:19.395 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048903_collect-bmc-pm.bmc.pm.log 00:03:20.331 15:08:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.331 15:08:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:20.331 15:08:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:20.331 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:03:20.331 15:08:24 -- spdk/autotest.sh@59 -- # create_test_list 00:03:20.331 15:08:24 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:20.331 15:08:24 -- common/autotest_common.sh@10 -- # set +x 00:03:20.331 15:08:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:20.331 15:08:24 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:20.331 15:08:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
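The resource monitors traced above follow a simple pidfile lifecycle: each collector is started with an output directory (-d), logging enabled (-l) and a log-name prefix (-p), a collect-<name>.pid file ends up under the power output directory, and teardown sends SIGTERM to whatever PID each pidfile holds. A minimal sketch of that pattern, with paths, flags and collector names taken from this log (the harness's own pm/common helpers do the real bookkeeping, and the scripts may manage their own pidfiles):

  OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  PREFIX=monitor.autotest.sh.$(date +%s)
  # start one background collector per resource and remember its PID
  # (collect-bmc-pm follows the same pattern but runs via sudo -E)
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
      ./scripts/perf/pm/$mon -d "$OUT" -l -p "$PREFIX" &
      echo $! > "$OUT/$mon.pid"
  done
  # stop: signal whatever PID each pidfile recorded
  for pidfile in "$OUT"/collect-*.pid; do
      [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
  done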
00:03:20.331 15:08:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:20.331 15:08:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:20.331 15:08:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:20.331 15:08:24 -- common/autotest_common.sh@1455 -- # uname 00:03:20.331 15:08:24 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:20.331 15:08:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.331 15:08:24 -- common/autotest_common.sh@1475 -- # uname 00:03:20.331 15:08:24 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:20.331 15:08:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:20.331 15:08:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:20.331 15:08:24 -- spdk/autotest.sh@72 -- # hash lcov 00:03:20.331 15:08:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:20.331 15:08:24 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:20.331 --rc lcov_branch_coverage=1 00:03:20.331 --rc lcov_function_coverage=1 00:03:20.331 --rc genhtml_branch_coverage=1 00:03:20.331 --rc genhtml_function_coverage=1 00:03:20.331 --rc genhtml_legend=1 00:03:20.331 --rc geninfo_all_blocks=1 00:03:20.331 ' 00:03:20.331 15:08:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:20.331 --rc lcov_branch_coverage=1 00:03:20.331 --rc lcov_function_coverage=1 00:03:20.331 --rc genhtml_branch_coverage=1 00:03:20.331 --rc genhtml_function_coverage=1 00:03:20.331 --rc genhtml_legend=1 00:03:20.331 --rc geninfo_all_blocks=1 00:03:20.331 ' 00:03:20.331 15:08:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:20.331 --rc lcov_branch_coverage=1 00:03:20.331 --rc lcov_function_coverage=1 00:03:20.331 --rc genhtml_branch_coverage=1 00:03:20.331 --rc genhtml_function_coverage=1 00:03:20.331 --rc genhtml_legend=1 00:03:20.331 --rc geninfo_all_blocks=1 00:03:20.331 --no-external' 00:03:20.331 15:08:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:20.331 --rc lcov_branch_coverage=1 00:03:20.331 --rc lcov_function_coverage=1 00:03:20.331 --rc genhtml_branch_coverage=1 00:03:20.331 --rc genhtml_function_coverage=1 00:03:20.331 --rc genhtml_legend=1 00:03:20.331 --rc geninfo_all_blocks=1 00:03:20.331 --no-external' 00:03:20.331 15:08:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:20.590 lcov: LCOV version 1.14 00:03:20.590 15:08:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:28.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:28.699 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:36.868 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:36.868 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:36.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:36.869 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:36.869 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:36.869 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:36.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:36.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:36.870 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:46.873 15:08:49 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:46.873 15:08:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:46.873 15:08:49 -- common/autotest_common.sh@10 -- # set +x 00:03:46.873 15:08:49 -- spdk/autotest.sh@91 -- # rm -f 00:03:46.873 15:08:49 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.776 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:48.776 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:49.035 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:49.035 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:49.035 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:49.035 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:49.035 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:49.035 15:08:52 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:49.035 15:08:52 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.035 15:08:52 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.035 15:08:52 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.035 15:08:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.035 15:08:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.035 15:08:52 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.035 15:08:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.035 15:08:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.035 15:08:52 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:49.035 15:08:52 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:49.035 15:08:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:49.035 15:08:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:49.035 15:08:52 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:49.035 15:08:52 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:49.035 No valid GPT data, bailing 00:03:49.035 15:08:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.035 15:08:52 -- scripts/common.sh@391 -- # pt= 00:03:49.035 15:08:52 -- scripts/common.sh@392 -- # return 1 00:03:49.035 15:08:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:49.035 1+0 records in 00:03:49.035 1+0 records out 00:03:49.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00872219 s, 120 MB/s 00:03:49.035 15:08:52 -- spdk/autotest.sh@118 -- # sync 00:03:49.035 15:08:52 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:49.035 15:08:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:49.035 15:08:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.149 15:09:00 -- spdk/autotest.sh@124 -- # uname -s 00:03:57.149 15:09:00 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:57.149 15:09:00 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:57.149 15:09:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.149 15:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.149 15:09:00 -- common/autotest_common.sh@10 -- # set +x 00:03:57.149 ************************************ 00:03:57.149 START TEST setup.sh 00:03:57.149 ************************************ 00:03:57.149 15:09:00 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:57.149 * Looking for test storage... 00:03:57.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:57.149 15:09:00 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:57.149 15:09:00 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:57.149 15:09:00 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:57.149 15:09:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.149 15:09:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.149 15:09:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.149 ************************************ 00:03:57.149 START TEST acl 00:03:57.149 ************************************ 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:57.149 * Looking for test storage... 
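The pre-cleanup wipe traced above is gated on a partition-table probe: spdk-gpt.py finds no valid GPT data and blkid reports no PTTYPE, so the device is treated as free and its first MiB is zeroed before the run. A minimal sketch of that gate, with the device name taken from this log (the real block_in_use helper in scripts/common.sh also consults spdk-gpt.py before falling back to blkid):

  dev=/dev/nvme0n1
  # empty output means blkid found no partition table on the device
  pt=$(blkid -s PTTYPE -o value "$dev")
  if [[ -z $pt ]]; then
      # nothing owns the disk: zero the first 1 MiB so stale GPT or
      # filesystem metadata from an earlier run cannot leak into this pass
      dd if=/dev/zero of="$dev" bs=1M count=1
      sync
  fi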
00:03:57.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:57.149 15:09:00 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:57.149 15:09:00 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:57.149 15:09:00 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:57.149 15:09:00 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:57.149 15:09:00 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:57.149 15:09:00 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:57.149 15:09:00 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:57.149 15:09:00 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.149 15:09:00 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.428 15:09:04 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:00.428 15:09:04 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:00.428 15:09:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:00.428 15:09:04 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:00.428 15:09:04 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.428 15:09:04 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:03.727 Hugepages 00:04:03.727 node hugesize free / total 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 00:04:03.727 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.727 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:03.728 15:09:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:03.728 15:09:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.728 15:09:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.728 15:09:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:03.728 ************************************ 00:04:03.728 START TEST denied 00:04:03.728 ************************************ 00:04:03.728 15:09:07 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:03.728 15:09:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:04:03.728 15:09:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:03.728 15:09:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:04:03.728 15:09:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.728 15:09:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.018 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:04:07.018 15:09:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:04:07.018 15:09:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:07.018 15:09:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.018 15:09:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:07.277 15:09:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:07.277 15:09:10 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.277 15:09:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.277 15:09:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:07.277 15:09:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.277 15:09:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.580 00:04:12.580 real 0m8.021s 00:04:12.580 user 0m2.602s 00:04:12.580 sys 0m4.789s 00:04:12.580 15:09:15 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.580 15:09:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:12.580 ************************************ 00:04:12.580 END TEST denied 00:04:12.580 ************************************ 00:04:12.580 15:09:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:12.580 15:09:15 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:12.580 15:09:15 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.580 15:09:15 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.580 15:09:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:12.580 ************************************ 00:04:12.580 START TEST allowed 00:04:12.580 ************************************ 00:04:12.580 15:09:15 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:12.580 15:09:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:12.580 15:09:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:12.580 15:09:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:12.580 15:09:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.580 15:09:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:16.762 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:16.762 15:09:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:16.762 15:09:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:16.762 15:09:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:16.762 15:09:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.762 15:09:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.951 00:04:20.951 real 0m8.771s 00:04:20.951 user 0m2.607s 00:04:20.951 sys 0m4.840s 00:04:20.951 15:09:24 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.951 15:09:24 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:20.951 ************************************ 00:04:20.951 END TEST allowed 00:04:20.951 ************************************ 00:04:20.951 15:09:24 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:20.951 00:04:20.951 real 0m24.216s 00:04:20.951 user 0m7.900s 00:04:20.951 sys 0m14.642s 00:04:20.951 15:09:24 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.951 15:09:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:20.951 ************************************ 00:04:20.951 END TEST acl 00:04:20.951 ************************************ 00:04:20.951 15:09:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:20.951 15:09:24 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:20.951 15:09:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.951 15:09:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.951 15:09:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.951 ************************************ 00:04:20.951 START TEST hugepages 00:04:20.951 ************************************ 00:04:20.951 15:09:24 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:20.951 * Looking for test storage... 00:04:20.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 40437804 kB' 'MemAvailable: 45386496 kB' 'Buffers: 2704 kB' 'Cached: 11432176 kB' 'SwapCached: 0 kB' 'Active: 7319204 kB' 'Inactive: 4656152 kB' 'Active(anon): 6928156 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544008 kB' 'Mapped: 208664 kB' 'Shmem: 6387680 kB' 'KReclaimable: 549200 kB' 'Slab: 1190780 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 641580 kB' 'KernelStack: 22240 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439072 kB' 'Committed_AS: 8382732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217224 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.951 15:09:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31/@32 compare-and-continue iterations for the remaining /proc/meminfo keys, MemFree through HugePages_Free, omitted ...] 00:04:20.953 15:09:24 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.953 
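The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo entry by entry with IFS=': ' until the requested key (Hugepagesize here) matches, then echoing its value: 2048 kB pages, with /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages and /proc/sys/vm/nr_hugepages recorded as the knobs to drive. A minimal standalone re-sketch of that lookup (simplified: the real helper also takes a node argument and switches to /sys/devices/system/node/node<N>/meminfo, as the [[ -e ... ]] check in the trace shows):

#!/usr/bin/env bash
# Scan /proc/meminfo with IFS=': ' and print the value of one key,
# mirroring the compare-and-continue loop traced above.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

default_hugepages=$(get_meminfo Hugepagesize)   # -> 2048 (kB), per the log
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
global_huge_nr=/proc/sys/vm/nr_hugepages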
15:09:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:20.953 15:09:24 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:20.953 15:09:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.953 15:09:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.953 15:09:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.953 ************************************ 00:04:20.953 START TEST default_setup 00:04:20.953 ************************************ 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.953 15:09:24 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.239 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.239 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.239 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 
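At this point clear_hp has zeroed every per-node hugepage pool (and exported CLEAR_HUGE=yes), and get_test_nr_hugepages has converted the requested 2097152 kB into nr_hugepages=1024 (1024 x 2048 kB pages) pinned to node 0; setup.sh is now applying that allocation and rebinding the ioatdma channels and the NVMe controller to vfio-pci so SPDK can drive them from userspace. A rough sketch of both steps under those assumptions (the sysfs paths are the standard kernel ones visible in the trace; setup.sh's actual implementation differs in detail):

# 1) Zero every per-node pool, then reserve 1024 x 2048 kB pages
#    (2097152 kB total) on node 0 only, as computed above.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# 2) Hand a device to vfio-pci with the standard driver_override
#    sysfs sequence (shown for the NVMe controller the log rebinds):
dev=0000:d8:00.0
echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
echo "$dev" > /sys/bus/pci/drivers_probe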
0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.498 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.410 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42629612 kB' 'MemAvailable: 47578304 kB' 'Buffers: 2704 kB' 'Cached: 11432292 kB' 'SwapCached: 0 kB' 'Active: 7339548 kB' 'Inactive: 4656152 kB' 'Active(anon): 6948500 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563940 kB' 'Mapped: 209336 kB' 'Shmem: 6387796 kB' 'KReclaimable: 549200 kB' 'Slab: 1188336 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 639136 kB' 
'KernelStack: 22496 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8402732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217404 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.411 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... identical compare-and-continue iterations for MemFree through WritebackTmp omitted ...] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.412 15:09:29 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42633196 kB' 'MemAvailable: 47581888 kB' 'Buffers: 2704 kB' 'Cached: 11432296 kB' 'SwapCached: 0 kB' 'Active: 7333792 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942744 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558308 kB' 'Mapped: 209016 kB' 'Shmem: 6387800 kB' 'KReclaimable: 549200 kB' 'Slab: 1188204 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 639004 kB' 'KernelStack: 22416 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8396632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217336 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.412 15:09:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ [... identical compare-and-continue iterations for Buffers through CmaTotal omitted ...] 00:04:26.414 15:09:29
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42630760 kB' 'MemAvailable: 47579452 kB' 'Buffers: 2704 kB' 'Cached: 11432312 kB' 'SwapCached: 0 kB' 'Active: 7333832 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942784 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558848 kB' 'Mapped: 208752 kB' 'Shmem: 6387816 kB' 'KReclaimable: 549200 kB' 'Slab: 1188204 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 639004 kB' 'KernelStack: 22288 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8396652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217336 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
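The mapfile -t mem and mem=("${mem[@]#Node +([0-9]) }") steps visible at common.sh@28-29 above capture the whole meminfo file once and strip the "Node N " prefix that per-node copies carry, so the same read loop parses both /proc/meminfo and /sys/devices/system/node/nodeN/meminfo; the long printf '%s\n' entry is simply that snapshot being replayed into the loop. A hedged sketch of the capture step, with variable names taken from the trace:

    shopt -s extglob                   # needed for the +([0-9]) pattern
    mapfile -t mem < "$mem_f"          # mem_f is /proc/meminfo or a per-node meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    printf '%s\n' "${mem[@]}"          # replayed into the IFS=': ' read loop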
00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.414 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 
15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.415 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.416 nr_hugepages=1024 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.416 resv_hugepages=0 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.416 surplus_hugepages=0 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.416 anon_hugepages=0 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42632928 
kB' 'MemAvailable: 47581620 kB' 'Buffers: 2704 kB' 'Cached: 11432336 kB' 'SwapCached: 0 kB' 'Active: 7333848 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942800 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558276 kB' 'Mapped: 208752 kB' 'Shmem: 6387840 kB' 'KReclaimable: 549200 kB' 'Slab: 1187992 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 638792 kB' 'KernelStack: 22352 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8396676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217336 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:29 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.416 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
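A side effect of IFS=': ' worth noting: both the colon and the space act as separators, so the numeric value lands in val and the unit in the throwaway third variable. A one-line check:

    IFS=': ' read -r var val _ <<< 'MemAvailable: 47581620 kB'
    echo "$var=$val"    # prints MemAvailable=47581620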
00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
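Between the scans, hugepages.sh checks the kernel's view against what it configured: the @107/@110 guards of the form (( 1024 == nr_hugepages + surp + resv )) assert that HugePages_Total equals the requested count plus the surplus and reserved pages just read back (surp=0 and resv=0 in this run). A sketch of that check, assuming the get_meminfo helper sketched earlier:

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this trace
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this trace
    total=$(get_meminfo HugePages_Total)  # 1024 in this trace
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2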
00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.417 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
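The scan ends on the next stretch of trace with echo 1024 / return 0, after which hugepages.sh@112 calls get_nodes to enumerate NUMA nodes and distribute the pages: the trace records nodes_sys assignments of 1024 for node 0 and 0 for node 1, with no_nodes=2. A hedged reconstruction of that enumeration, using the array and variable names as they appear in the trace:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=0    # this run then assigns 1024 to node 0, 0 to node 1
    done
    no_nodes=${#nodes_sys[@]}          # 2 on this machine
    (( no_nodes > 0 ))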
00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:26.418 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25095412 kB' 'MemUsed: 7543728 kB' 'SwapCached: 0 kB' 'Active: 3039460 kB' 'Inactive: 622632 kB' 'Active(anon): 2736068 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3287384 kB' 'Mapped: 131768 kB' 'AnonPages: 377816 kB' 'Shmem: 2361360 kB' 'KernelStack: 12408 kB' 'PageTables: 5208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 351840 kB' 'Slab: 667632 kB' 'SReclaimable: 351840 kB' 'SUnreclaim: 315792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:26.418 [xtrace condensed: the scan "[[ <field> == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _" repeats for every node0 meminfo field from MemTotal through HugePages_Free]
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:26.419 node0=1024 expecting 1024
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:26.419
00:04:26.419 real 0m5.324s
00:04:26.419 user 0m1.436s
00:04:26.419 sys 0m2.472s
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:26.419 15:09:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:26.419 ************************************
00:04:26.419 END TEST default_setup
00:04:26.419 ************************************
00:04:26.419 15:09:30 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:26.419 15:09:30 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:26.419 15:09:30 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:26.419 15:09:30 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:26.419 15:09:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:26.419 ************************************
00:04:26.419 START TEST per_node_1G_alloc
00:04:26.419 ************************************
00:04:26.419 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:26.419 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:26.419 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.420 15:09:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:28.951 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:28.951 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
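The get_test_nr_hugepages trace above boils down to one division: a 1048576 kB (1 GiB) request over this host's 2048 kB default hugepage size gives 512 pages, and that count is assigned to each node named in the call, hence NRHUGE=512 and HUGENODE=0,1 for scripts/setup.sh. A minimal sketch of that sizing, reconstructed from the xtrace rather than copied from setup/hugepages.sh (the variable names mirror the trace; the division assumes the 2048 kB default seen in this run):

#!/usr/bin/env bash
# Sketch of the per-node hugepage sizing traced above (not the script verbatim).
default_hugepages=2048 # kB, the default hugepage size on this host

get_test_nr_hugepages() {
    local size=$1 # requested allocation per node, in kB
    shift
    local node_ids=("$@") # e.g. 0 1
    local node nr_hugepages=$((size / default_hugepages))
    declare -ga nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages # 512 pages on every requested node
    done
    # setup.sh consumes these: 512 pages on each of nodes 0 and 1 -> 1024 total
    echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"
}

get_test_nr_hugepages 1048576 0 1 # prints: NRHUGE=512 HUGENODE=0,1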
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.216 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42670032 kB' 'MemAvailable: 47618732 kB' 'Buffers: 2704 kB' 'Cached: 11432440 kB' 'SwapCached: 0 kB' 'Active: 7334008 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942960 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557800 kB' 'Mapped: 208920 kB' 'Shmem: 6387944 kB' 'KReclaimable: 549208 kB' 'Slab: 1187184 kB' 'SReclaimable: 549208 kB' 'SUnreclaim: 637976 kB' 'KernelStack: 22272 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8396112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217352 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
00:04:29.216 [xtrace condensed: the scan "[[ <field> == AnonHugePages ]] / continue / IFS=': ' / read -r var val _" repeats for every /proc/meminfo field from MemTotal through HardwareCorrupted]
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
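Every get_meminfo call in this trace follows the same shape: pick /proc/meminfo (or the per-node sysfs meminfo when a node id is passed, which is what the /sys/devices/system/node/node$node/meminfo existence check probes), strip the "Node N" prefix, then walk the fields with IFS=': ' read until the requested key matches and echo its value. A self-contained sketch of that scan, inferred from the xtrace above; setup/common.sh slurps the file with mapfile, while this version streams it line by line, which behaves the same for a single lookup:

#!/usr/bin/env bash
# Reconstruction of the get_meminfo scan traced above (not the script verbatim).
# get_meminfo FIELD [NODE] -> prints the field's value (kB or page count)
get_meminfo() {
    local get=$1 node=${2:-}
    local line var val _
    local mem_f=/proc/meminfo
    # Per-node stats live under sysfs and carry a "Node N " prefix per line.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        [[ -n $node ]] && line=${line#Node $node } # drop the sysfs prefix
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val" # value only, e.g. "1024" or "557800"
            return 0
        fi
    done <"$mem_f"
    return 1
}

get_meminfo HugePages_Surp   # system-wide surplus pages, 0 in this run
get_meminfo HugePages_Free 0 # free hugepages on node 0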
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.217 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.218 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42670580 kB' 'MemAvailable: 47619272 kB' 'Buffers: 2704 kB' 'Cached: 11432444 kB' 'SwapCached: 0 kB' 'Active: 7333608 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942560 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557888 kB' 'Mapped: 208844 kB' 'Shmem: 6387948 kB' 'KReclaimable: 549200 kB' 'Slab: 1187124 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 637924 kB' 'KernelStack: 22320 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8395968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217256 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
00:04:29.218 [xtrace condensed: the scan "[[ <field> == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _" repeats for every /proc/meminfo field from MemTotal through HugePages_Rsvd]
00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
setup/hugepages.sh@99 -- # surp=0 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42670852 kB' 'MemAvailable: 47619544 kB' 'Buffers: 2704 kB' 'Cached: 11432460 kB' 'SwapCached: 0 kB' 'Active: 7334104 kB' 'Inactive: 4656152 kB' 'Active(anon): 6943056 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558388 kB' 'Mapped: 208844 kB' 'Shmem: 6387964 kB' 'KReclaimable: 549200 kB' 'Slab: 1187124 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 637924 kB' 'KernelStack: 22416 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8396088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217416 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 
15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.219 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.220 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.221 15:09:32 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.221 nr_hugepages=1024 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.221 
resv_hugepages=0 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.221 surplus_hugepages=0 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.221 anon_hugepages=0 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42671100 kB' 'MemAvailable: 47619792 kB' 'Buffers: 2704 kB' 'Cached: 11432484 kB' 'SwapCached: 0 kB' 'Active: 7333980 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942932 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558212 kB' 'Mapped: 208844 kB' 'Shmem: 6387988 kB' 'KReclaimable: 549200 kB' 'Slab: 1187124 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 637924 kB' 'KernelStack: 22352 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8397504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217400 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 
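[annotation] The printf just above is the raw /proc/meminfo snapshot captured for the HugePages_Total query. Its hugepage fields are internally consistent: with the 2048 kB page size shown, the 1024 configured pages should pin exactly the reported 'Hugetlb: 2097152 kB'. A quick check using only figures from the trace:

```bash
# Values as printed in the snapshot above.
hugepagesize_kb=2048
hugepages_total=1024
echo $(( hugepages_total * hugepagesize_kb ))   # 2097152, matching Hugetlb
```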
15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.223 15:09:33 
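[annotation] At this point the HugePages_Total walk has echoed 1024, and hugepages.sh@110 re-verifies that the requested page count matches what the kernel reports once surplus and reserved pages are folded in. A standalone restatement of that guard, reusing the get_meminfo sketch above, with the values observed in this run:

```bash
nr_hugepages=1024                       # what the test requested
total=$(get_meminfo HugePages_Total)    # kernel reports 1024 in the trace
surp=$(get_meminfo HugePages_Surp)      # 0
resv=$(get_meminfo HugePages_Rsvd)      # 0
(( total == nr_hugepages + surp + resv )) \
    && echo "all $nr_hugepages pages present, none surplus or reserved"
```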
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26162012 kB' 'MemUsed: 6477128 kB' 'SwapCached: 0 kB' 'Active: 3036960 kB' 'Inactive: 622632 kB' 'Active(anon): 2733568 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3287516 kB' 'Mapped: 131828 kB' 'AnonPages: 374768 kB' 'Shmem: 2361492 kB' 'KernelStack: 12424 kB' 'PageTables: 5276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 351840 kB' 'Slab: 667016 kB' 'SReclaimable: 351840 kB' 'SUnreclaim: 315176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.223 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- 
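[annotation] The get_nodes fragment above globs /sys/devices/system/node/node+([0-9]), finds two NUMA nodes, and pencils in 512 pages for each before reading node 0's meminfo. A sketch of that discovery plus a per-node check; the real hugepages.sh additionally folds reserved pages into the per-node expectation (the traced `(( nodes_test[node] += resv ))`), a no-op here since resv=0:

```bash
shopt -s extglob nullglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512        # expected even split of 1024 pages
done
no_nodes=${#nodes_sys[@]}                # 2 on this machine
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
for n in "${!nodes_sys[@]}"; do
    got=$(get_meminfo HugePages_Total "$n")   # per-node sysfs counter
    (( got == nodes_sys[n] )) || echo "node$n: $got != ${nodes_sys[n]}"
done
```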
setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for every remaining node0 meminfo field, MemUsed through HugePages_Total; none match HugePages_Surp]
00:04:29.224 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.224 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 16507896 kB' 'MemUsed: 11148204 kB' 'SwapCached: 0 kB' 'Active: 4296804 kB' 'Inactive: 4033520 kB' 'Active(anon): 4209148 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8147696 kB' 'Mapped: 77488 kB' 'AnonPages: 182656 kB' 'Shmem: 4026520 kB' 'KernelStack: 9928 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197360 kB' 'Slab: 520108 kB' 'SReclaimable: 197360 kB' 'SUnreclaim: 322748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.225 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
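For reference, the get_meminfo helper driving this whole trace reduces to the bash pattern below (a minimal sketch, not SPDK's setup/common.sh verbatim; get_meminfo_sketch is an assumed name). It shows the same moving parts the xtrace exposes: mapfile into an array, stripping the "Node <n> " prefix from per-node files, and an IFS=': ' read loop that scans field names until the requested one matches.

  #!/usr/bin/env bash
  shopt -s extglob  # required for the +([0-9]) pattern below
  # Minimal sketch of the meminfo lookup traced here (assumed name, not SPDK's code).
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node queries read the node-local file when it exists,
      # e.g. /sys/devices/system/node/node1/meminfo as in the trace.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node <n> "; strip it so the
      # field names line up with the plain /proc/meminfo layout.
      mem=("${mem[@]#Node +([0-9]) }")
      local IFS=': ' var val _
      while read -r var val _; do
          # Scan field names until the requested one matches, then emit its value.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  get_meminfo_sketch HugePages_Surp 1   # e.g. prints 0, as the traced call does

The scan that resumes below is exactly this read loop walking the node1 dump printed above, one field per iteration.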
[xtrace condensed: setup/common.sh@31-32 runs the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue scan over the node1 meminfo fields, MemTotal through FilePmdMapped; none match] 00:04:29.226 15:09:33
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.226 node0=512 expecting 512 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:29.226 node1=512 expecting 512 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.226 00:04:29.226 real 0m2.964s 00:04:29.226 user 0m1.025s 00:04:29.226 sys 0m1.793s 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.226 15:09:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.226 ************************************ 00:04:29.226 END TEST per_node_1G_alloc 00:04:29.226 ************************************ 00:04:29.484 15:09:33 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.484 15:09:33 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:29.484 15:09:33 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.484 15:09:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.484 15:09:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.484 ************************************ 00:04:29.484 START TEST even_2G_alloc 00:04:29.484 ************************************ 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.484 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.485 15:09:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.791 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
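The get_test_nr_hugepages / get_test_nr_hugepages_per_node arithmetic traced above turns the 2 GiB request (2097152 kB at the default 2048 kB hugepage size) into NRHUGE=1024 pages, split evenly as 512/512 across the two NUMA nodes. A minimal sketch of that split (split_hugepages_evenly is a made-up name, and the remainder handling shown is an assumption; the real helper may distribute leftovers differently):

  # Sketch: divide nr_hugepages evenly across NUMA nodes (assumed name).
  split_hugepages_evenly() {
      local nr_hugepages=$1 no_nodes=$2
      local -a nodes_test
      local node
      # Give every node an equal share first...
      for ((node = 0; node < no_nodes; node++)); do
          nodes_test[node]=$((nr_hugepages / no_nodes))
      done
      # ...then hand out any remainder one page at a time from node 0 up.
      for ((node = 0; node < nr_hugepages % no_nodes; node++)); do
          nodes_test[node]=$((nodes_test[node] + 1))
      done
      for node in "${!nodes_test[@]}"; do
          echo "node$node=${nodes_test[node]}"
      done
  }
  split_hugepages_evenly 1024 2   # -> node0=512, node1=512, matching the trace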
00:04:32.791 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:32.791 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.791 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42660616 kB' 'MemAvailable: 47609308 kB' 'Buffers: 2704 kB' 'Cached: 11432604 kB' 'SwapCached: 0 kB' 'Active: 7332472 kB' 'Inactive: 4656152 kB' 'Active(anon): 6941424 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556504 kB' 'Mapped: 207708 kB' 'Shmem: 6388108 kB' 'KReclaimable: 549200 kB' 'Slab: 1187072 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 637872 kB' 'KernelStack: 22320 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8389296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217400 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:32.792 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the AnonHugePages scan continues over Inactive through HardwareCorrupted; none match]
00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.793 15:09:36
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.793 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42661944 kB' 'MemAvailable: 47610636 kB' 'Buffers: 2704 kB' 'Cached: 11432608 kB' 'SwapCached: 0 kB' 'Active: 7332152 kB' 'Inactive: 4656152 kB' 'Active(anon): 6941104 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556240 kB' 'Mapped: 207676 kB' 'Shmem: 6388112 kB' 'KReclaimable: 549200 kB' 'Slab: 1187120 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 637920 kB' 'KernelStack: 22272 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8389312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217368 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.794 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats the HugePages_Surp scan over Buffers through KernelStack; none match]
00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 
15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.795 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42660948 kB' 'MemAvailable: 47609640 kB' 'Buffers: 2704 kB' 'Cached: 11432624 kB' 'SwapCached: 0 kB' 'Active: 7332432 kB' 'Inactive: 4656152 kB' 'Active(anon): 6941384 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556612 kB' 'Mapped: 207676 kB' 'Shmem: 6388128 kB' 'KReclaimable: 549200 kB' 'Slab: 1187120 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 637920 kB' 'KernelStack: 22320 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8390084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217368 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.796 15:09:36 setup.sh.hugepages.even_2G_alloc -- 
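The entries above are bash xtrace from get_meminfo in setup/common.sh: called without a node argument, node= stays empty, the [[ -e /sys/devices/system/node/node/meminfo ]] test fails, so the function snapshots /proc/meminfo and walks it field by field until the requested key matches. The backslash-escaped \H\u\g\e\P\a\g\e\s\_... strings are just how xtrace re-quotes the quoted right-hand side of [[ == ]] so it matches literally rather than as a glob. A minimal sketch of the loop that produces this trace, reconstructed from the trace itself (names follow the trace; this is not the verbatim SPDK helper):

shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

get_meminfo() {    # usage: get_meminfo <field> [<numa-node>]
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; use them only when a node was given.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # one skipped field per continue above
        echo "$val"                        # evidently captured via $(...) in hugepages.sh,
        return 0                           # hence no bare value on stdout in the log
    done
    return 1
}

Each non-matching field costs one [[ ... ]] / continue / IFS / read quartet in the trace, which is why a single surp=$(get_meminfo HugePages_Surp) lookup emits dozens of near-identical lines.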
[... trace elided: MemTotal through HugePages_Free are each compared against HugePages_Rsvd and skipped via continue, with IFS=': ' and read -r var val _ between comparisons (00:04:32.796-00:04:32.797) ...]
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.797 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.798 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42660840 kB' 'MemAvailable: 47609532 kB' 'Buffers: 2704 kB' 'Cached: 11432664 kB' 'SwapCached: 0 kB' 'Active: 7331788 kB' 'Inactive: 4656152 kB' 'Active(anon): 6940740 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555852 kB' 'Mapped: 207676 kB' 'Shmem: 6388168 kB' 'KReclaimable: 549200 kB' 'Slab: 1187120 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 637920 kB' 'KernelStack: 22256 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8389356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217304 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
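To recap the values hugepages.sh has extracted at this point: surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd) and nr_hugepages=1024, and the (( ... )) guards at hugepages.sh@107/@109 assert that the kernel's pool matches the requested size before per-node verification starts (the same identity is re-checked at @110 after HugePages_Total is re-read). Plugging in the snapshot values above, a worked form of the trace's arithmetic rather than additional test logic:

(( 1024 == 1024 + 0 + 0 ))     # HugePages_Total == nr_hugepages + surp + resv
(( 1024 * 2048 == 2097152 ))   # 1024 pages x Hugepagesize 2048 kB == Hugetlb 2097152 kB, i.e. 2 GiB

The 2 GiB figure matches the test name, even_2G_alloc: the next step checks that this pool is split evenly across the NUMA nodes.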
[... trace elided: MemTotal through Unaccepted are each compared against HugePages_Total and skipped via continue, with IFS=': ' and read -r var val _ between comparisons (00:04:32.798-00:04:32.799) ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26152852 kB' 'MemUsed: 6486288 kB' 'SwapCached: 0 kB' 'Active: 3036232 kB' 'Inactive: 622632 kB' 'Active(anon): 2732840 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3287640 kB' 'Mapped: 130772 kB' 
'AnonPages: 374456 kB' 'Shmem: 2361616 kB' 'KernelStack: 12440 kB' 'PageTables: 5244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 351840 kB' 'Slab: 666868 kB' 'SReclaimable: 351840 kB' 'SUnreclaim: 315028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.799 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:32.800 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 16508712 kB' 'MemUsed: 11147388 kB' 'SwapCached: 0 kB' 'Active: 4295952 kB' 'Inactive: 4033520 kB' 'Active(anon): 4208296 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8147732 kB' 'Mapped: 76904 kB' 'AnonPages: 181784 kB' 'Shmem: 4026556 kB' 
'KernelStack: 9832 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197360 kB' 'Slab: 520252 kB' 'SReclaimable: 197360 kB' 'SUnreclaim: 322892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:33.059 node0=512 expecting 512 00:04:33.059 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.060 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.060 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.060 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:33.060 node1=512 expecting 512 00:04:33.060 15:09:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:33.060 00:04:33.060 real 0m3.550s 00:04:33.060 user 0m1.370s 00:04:33.060 sys 0m2.250s 00:04:33.060 15:09:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.060 15:09:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.060 ************************************ 00:04:33.060 END TEST even_2G_alloc 00:04:33.060 ************************************ 00:04:33.060 15:09:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:33.060 15:09:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:33.060 15:09:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.060 15:09:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.060 15:09:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.060 
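Note on the trace above: get_meminfo (setup/common.sh) resolves a single key such as HugePages_Total or HugePages_Surp by reading either /proc/meminfo or the per-NUMA-node copy under /sys/devices/system/node/nodeN/meminfo, stripping the "Node N " prefix that the sysfs files carry, and then scanning one "key: value" pair at a time until the requested key matches; that scan is what the long [[ ... ]] / continue runs record. A minimal sketch of the pattern, reconstructed from the trace (an approximation for illustration, not the verbatim SPDK helper):

    # Sketch of the meminfo scan seen in setup/common.sh's xtrace above.
    # Assumption: simplified reconstruction, not the verbatim SPDK script.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        shopt -s extglob
        # Per-node statistics live in sysfs; each line is prefixed "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix
        while IFS=': ' read -r var val _; do  # e.g. var=HugePages_Total val=1024
            [[ $var == "$get" ]] || continue  # the long skip runs in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # -> 1024 on this box, per the echo above
    get_meminfo HugePages_Surp 0     # -> 0, read from node0's sysfs meminfo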
00:04:33.060 ************************************
00:04:33.060 START TEST odd_alloc
00:04:33.060 ************************************
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.060 15:09:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:36.415 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:36.415 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:36.415 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:36.416 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42660128 kB' 'MemAvailable: 47608820 kB' 'Buffers: 2704 kB' 'Cached: 11432776 kB' 'SwapCached: 0 kB' 'Active: 7333808 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942760 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557936 kB' 'Mapped: 207728 kB' 'Shmem: 6388280 kB' 'KReclaimable: 549200 kB' 'Slab: 1187636 kB' 'SReclaimable: 549200 kB' 'SUnreclaim: 638436 kB' 'KernelStack: 22272 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8389980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217224 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
00:04:36.416 [setup/common.sh@31-32: every /proc/meminfo key before AnonHugePages (MemTotal through HardwareCorrupted) is compared against AnonHugePages and skipped with continue]
setup/common.sh@31 -- # IFS=': '
00:04:36.416 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue for every remaining /proc/meminfo field from Active(anon) through HardwareCorrupted; none match]
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
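The trace has just re-entered get_meminfo, now asking for HugePages_Surp; what follows is the same field-by-field walk over the /proc/meminfo snapshot that was just completed for AnonHugePages. A minimal sketch of the parsing pattern setup/common.sh is tracing here, assuming only a Linux /proc/meminfo; the name get_meminfo_value and the sample call are illustrative stand-ins, not the suite's exact helper:

    get_meminfo_value() {
        # Split each "Field:   value kB" line on ':' and spaces, so $var
        # holds the field name and $val the number; skip until it matches.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done </proc/meminfo
        return 1    # requested field absent
    }

    get_meminfo_value HugePages_Surp    # on this host: 0

The real helper also substitutes a per-node meminfo file when a node argument is given, which is why the trace checks /sys/devices/system/node/node/meminfo before falling back to /proc/meminfo.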
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:36.417 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42660060 kB' 'MemAvailable: 47608720 kB' 'Buffers: 2704 kB' 'Cached: 11432792 kB' 'SwapCached: 0 kB' 'Active: 7333176 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942128 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557296 kB' 'Mapped: 207688 kB' 'Shmem: 6388296 kB' 'KReclaimable: 549168 kB' 'Slab: 1187616 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638448 kB' 'KernelStack: 22256 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8390000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217208 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
[xtrace condensed: setup/common.sh@31-32 scans every dumped field from MemTotal through HugePages_Rsvd against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and continues past each non-match]
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.418 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.419 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:36.419 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:36.419 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42659304 kB' 'MemAvailable: 47607964 kB' 'Buffers: 2704 kB' 'Cached: 11432796 kB' 'SwapCached: 0 kB' 'Active: 7333596 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942548 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557676 kB' 'Mapped: 207688 kB' 'Shmem: 6388300 kB' 'KReclaimable: 549168 kB' 'Slab: 1187616 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638448 kB' 'KernelStack: 22272 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8390020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217224 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
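The snapshot just dumped is the raw material for these lookups: every counter the test extracts is a plain /proc/meminfo field. For orientation, the same hugepage numbers can be read outside the harness through standard kernel interfaces (stock Linux paths; the values are machine-specific):

    # The pool counters get_meminfo is being asked for, straight from procfs:
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # Sysfs view of the 2048 kB pool the odd_alloc case resized to 1025 pages:
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages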
[xtrace condensed: setup/common.sh@31-32 scans every dumped field from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and continues past each non-match]
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:36.682 nr_hugepages=1025
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:36.682 resv_hugepages=0
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:36.682 surplus_hugepages=0
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:36.682 anon_hugepages=0
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
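The two arithmetic guards at setup/hugepages.sh@107 and @109 are the point of the odd_alloc case: after requesting an odd page count, the pool must report exactly 1025 pages with no surplus or reserved remainder. A sketch of the same check with the values from this run; the error messages and exit handling are illustrative, not the suite's exact code:

    nr_hugepages=1025 surp=0 resv=0 anon=0    # anon is reported above but not part of these guards
    (( 1025 == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
    (( 1025 == nr_hugepages ))               || { echo "unexpected pool size" >&2; exit 1; }

With both guards satisfied, the test re-queries HugePages_Total below to confirm the global pool one more time before moving on.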
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42658672 kB' 'MemAvailable: 47607332 kB' 'Buffers: 2704 kB' 'Cached: 11432816 kB' 'SwapCached: 0 kB' 'Active: 7333604 kB' 'Inactive: 4656152 kB' 'Active(anon): 6942556 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557676 kB' 'Mapped: 207688 kB' 'Shmem: 6388320 kB' 'KReclaimable: 549168 kB' 'Slab: 1187616 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638448 kB' 'KernelStack: 22272 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8390040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217240 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:36.682 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 scans the dumped fields against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, continuing past MemTotal through AnonPages; the Mapped comparison follows]
00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.683 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26126592 kB' 'MemUsed: 6512548 kB' 'SwapCached: 0 kB' 'Active: 3038220 kB' 'Inactive: 622632 kB' 'Active(anon): 2734828 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3287804 kB' 'Mapped: 130784 kB' 'AnonPages: 376404 kB' 'Shmem: 2361780 kB' 'KernelStack: 12440 kB' 'PageTables: 5220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 351808 kB' 'Slab: 667480 kB' 'SReclaimable: 351808 kB' 'SUnreclaim: 315672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
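
The trace above and below is the get_meminfo helper from test/setup/common.sh scanning a meminfo dump one key at a time: the file is read into an array with mapfile, any leading "Node <id> " prefix is stripped so the per-node sysfs files parse exactly like /proc/meminfo, and each line is split with IFS=': ' read -r var val _; every non-matching key hits `continue` (hence the long repetitive scan filling this log) until the requested key matches and its value is echoed. The following is a minimal standalone sketch of that pattern, assuming bash with extglob; the _sketch name and exact structure are illustrative, not SPDK's verbatim code. The scan of the node0 dump printed above continues in the trace after this note.

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs; fall back to the global file
    # when no node is given (mirrors common.sh@22-24 in the trace).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    shopt -s extglob
    # sysfs lines are prefixed "Node <id> "; strip it so both formats
    # parse identically (mirrors mem=("${mem[@]#Node +([0-9]) }") above).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # skip keys until the match
        echo "$val"
        return 0
    done
    return 1
}
# Example calls (outputs shown are what this box reports in the dumps above):
#   get_meminfo_sketch HugePages_Total      -> 1025
#   get_meminfo_sketch HugePages_Surp 0     -> 0
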
00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.684 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 16530316 kB' 'MemUsed: 11125784 kB' 'SwapCached: 0 kB' 'Active: 4296068 kB' 'Inactive: 4033520 kB' 'Active(anon): 4208412 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8147756 kB' 'Mapped: 76904 kB' 'AnonPages: 181972 kB' 'Shmem: 4026580 kB' 'KernelStack: 9816 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197360 kB' 'Slab: 520136 kB' 'SReclaimable: 197360 kB' 'SUnreclaim: 322776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.685 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:36.686 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:36.687 node0=512 expecting 513 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:36.687 node1=513 expecting 512 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:36.687 00:04:36.687 real 0m3.610s 00:04:36.687 user 0m1.387s 00:04:36.687 sys 0m2.284s 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.687 15:09:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.687 ************************************ 00:04:36.687 END TEST odd_alloc 00:04:36.687 ************************************ 00:04:36.687 15:09:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:36.687 15:09:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:36.687 15:09:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.687 15:09:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.687 15:09:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.687 ************************************ 00:04:36.687 START TEST custom_alloc 00:04:36.687 ************************************ 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.687 15:09:40 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
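
By this point custom_alloc has sized two pools. get_test_nr_hugepages divides the requested size by the default hugepage size to get a page count; with the 2048 kB pages reported in the meminfo dumps above, 1048576/2048 = 512 and 2097152/2048 = 1024 (a 1 GiB and a 2 GiB pool, assuming the request is expressed in kB), recorded as nodes_hp[0]=512 and nodes_hp[1]=1024. The trace that follows joins these into the HUGENODE string handed to scripts/setup.sh. A hedged sketch of both steps (the 2048 kB default is an assumption taken from the "Hugepagesize: 2048 kB" lines above):

default_hugepages=2048                  # kB per huge page (assumption)
get_test_nr_hugepages_sketch() {
    local size=$1                       # requested pool size, in kB here
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
}
get_test_nr_hugepages_sketch 1048576; echo "$nr_hugepages"    # 512
get_test_nr_hugepages_sketch 2097152; echo "$nr_hugepages"    # 1024

# One pool per NUMA node, joined with IFS=, the same way the next trace
# lines do before invoking scripts/setup.sh:
nodes_hp=([0]=512 [1]=1024)
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
(IFS=,; echo "HUGENODE=${HUGENODE[*]}")
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
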
00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.687 15:09:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.970 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:80:04.4 (8086 2021): Already using the 
vfio-pci driver 00:04:39.970 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:39.970 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41628748 kB' 'MemAvailable: 46577408 kB' 'Buffers: 2704 kB' 'Cached: 11432932 kB' 'SwapCached: 0 kB' 'Active: 7335316 kB' 'Inactive: 4656152 kB' 'Active(anon): 6944268 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558712 kB' 'Mapped: 207784 kB' 'Shmem: 6388436 kB' 'KReclaimable: 549168 kB' 'Slab: 1187628 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638460 kB' 'KernelStack: 22368 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8392264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217448 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:39.970 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.971 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.972 15:09:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41630068 kB' 'MemAvailable: 46578728 kB' 'Buffers: 2704 kB' 'Cached: 11432936 kB' 'SwapCached: 0 kB' 'Active: 7334732 kB' 'Inactive: 4656152 kB' 'Active(anon): 6943684 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558044 kB' 'Mapped: 207784 kB' 'Shmem: 6388440 kB' 'KReclaimable: 549168 kB' 'Slab: 1187624 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638456 kB' 'KernelStack: 22240 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8393524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217400 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.972 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.973 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
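The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue' through this stretch are bash xtrace from common.sh's get_meminfo helper: it mapfiles the whole meminfo file (falling back from the per-node sysfs path when no node is given, hence the failed test on /sys/devices/system/node/node/meminfo), strips any 'Node <N> ' prefix with an extglob, and scans 'var: val' pairs until the requested key matches, echoing the value. A compact sketch follows; the herestring loop and the no-match fallback are equivalent restructurings and assumptions on my part, not the literal common.sh code.

    shopt -s extglob   # needed for the +([0-9]) prefix strip below

    # Sketch: fetch one field the way the trace shows, e.g.
    # get_meminfo HugePages_Surp, or get_meminfo HugePages_Total 0 per node.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem

        # with a node argument, per-node counters come from sysfs instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # node meminfo prefixes every line with "Node <N> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        echo 0   # assumed fallback for a key that never appears
    }

    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # queried next in the log

Scanning field by field instead of grepping keeps one code path for both the flat /proc/meminfo layout and the 'Node <N> '-prefixed sysfs layout, which is why every meminfo key shows up in the xtrace before the requested one matches.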
00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41632428 kB' 'MemAvailable: 46581088 kB' 'Buffers: 2704 kB' 'Cached: 11432948 kB' 'SwapCached: 0 kB' 'Active: 7334992 kB' 'Inactive: 4656152 kB' 'Active(anon): 6943944 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559520 kB' 'Mapped: 207708 kB' 'Shmem: 6388452 kB' 'KReclaimable: 549168 kB' 'Slab: 1187612 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638444 kB' 'KernelStack: 22304 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8393544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217480 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
00:04:39.974 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace elided: the read loop `continue`s past every remaining /proc/meminfo key (Zswap through HugePages_Free) until it reaches HugePages_Rsvd]
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:39.976 nr_hugepages=1536
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.976 resv_hugepages=0
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.976 surplus_hugepages=0
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.976 anon_hugepages=0
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41629212 kB' 'MemAvailable: 46577872 kB' 'Buffers: 2704 kB' 'Cached: 11432972 kB' 'SwapCached: 0 kB' 'Active: 7336940 kB' 'Inactive: 4656152 kB' 'Active(anon): 6945892 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559212 kB' 'Mapped: 208272 kB' 'Shmem: 6388476 kB' 'KReclaimable: 549168 kB' 'Slab: 1187612 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638444 kB' 'KernelStack: 22512 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8409832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217528 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
00:04:39.976 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace elided: the loop `continue`s past every key until it reaches HugePages_Total]
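
[editor's note] The blocks above are setup/common.sh's get_meminfo() walking a meminfo dump one "key: value" line at a time; the escaped strings like \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just how bash xtrace prints the quoted right-hand side of the [[ ]] comparison. A minimal reconstruction from the traced line numbers, assuming plain bash with extglob enabled; anything not visible in the trace (the final return 1, the exact loop plumbing) is an assumption:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-} var val
        local mem_f mem
        mem_f=/proc/meminfo
        # per-node lookups read that NUMA node's own meminfo instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        # IFS=': ' splits "HugePages_Total:    1536" into var/val
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumed: key not found
    }

Called as $(get_meminfo HugePages_Total), the echoed value is captured by command substitution, which is why the "echo 1536" and "echo 0" trace lines produce no visible console output of their own.
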
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.977 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.978 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.978 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.978 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.978 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26140640 kB' 'MemUsed: 6498500 kB' 'SwapCached: 0 kB' 'Active: 3043088 kB' 'Inactive: 622632 kB' 'Active(anon): 2739696 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3287904 kB' 'Mapped: 130800 kB' 'AnonPages: 381008 kB' 'Shmem: 2361880 kB' 'KernelStack: 12424 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 351808 kB' 'Slab: 667416 kB' 'SReclaimable: 351808 kB' 'SUnreclaim: 315608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.978 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace elided: the loop `continue`s past every node0 key until it reaches HugePages_Surp]
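
[editor's note] The node-0 pass above switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node <n> " prefix; the expansion at common.sh@29 strips that prefix so the same parser works for both the system-wide and per-node files. A quick illustration (extglob required; the sample line is hypothetical):

    shopt -s extglob
    line='Node 0 HugePages_Total:   512'
    echo "${line#Node +([0-9]) }"   # -> HugePages_Total:   512
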
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
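
[editor's note] Node 0 is now accounted for: nodes_test[0] picks up resv (0) and node 0's HugePages_Surp (0), while the node itself reports HugePages_Total 512 of the 1536 pages requested. A sketch of the hugepages.sh@115-117 loop this trace is executing; get_meminfo is the function reconstructed earlier, and pre-seeding nodes_test with the expected 512/1024 split is an assumption, not visible in the trace:

    resv=0
    nodes_test=([0]=512 [1]=1024)   # assumed: expected custom split, 512 + 1024 = 1536
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                      # hugepages.sh@116
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))     # hugepages.sh@117
    done
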
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 15481572 kB' 'MemUsed: 12174528 kB' 'SwapCached: 0 kB' 'Active: 4297476 kB' 'Inactive: 4033520 kB' 'Active(anon): 4209820 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8147792 kB' 'Mapped: 77212 kB' 'AnonPages: 182800 kB' 'Shmem: 4026616 kB' 'KernelStack: 9976 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197360 kB' 'Slab: 520188 kB' 'SReclaimable: 197360 kB' 'SUnreclaim: 322828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:40.238 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace continues: the loop steps key-by-key through the node1 snapshot toward HugePages_Surp]
00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.239 
15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:40.239 node0=512 expecting 512 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:40.239 node1=1024 expecting 1024 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:40.239 00:04:40.239 real 0m3.413s 00:04:40.239 user 0m1.298s 00:04:40.239 sys 0m2.148s 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.239 15:09:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.239 ************************************ 00:04:40.239 END TEST custom_alloc 00:04:40.239 ************************************ 00:04:40.239 15:09:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:40.239 15:09:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:40.239 15:09:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.239 15:09:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.239 15:09:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.239 ************************************ 00:04:40.239 START TEST no_shrink_alloc 00:04:40.239 ************************************ 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.239 15:09:44 
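The get_test_nr_hugepages trace that starts above and finishes just below (hugepages.sh@49-@73) is the whole sizing step for this test: 2097152 kB at the default 2048 kB hugepage size gives nr_hugepages=1024, and because a single node id ('0') was passed, the entire allocation is pinned to node 0 even though the box has two nodes (_no_nodes=2). A minimal standalone sketch of that arithmetic, with hypothetical names (plan_test_hugepages is not the harness function; the real setup/hugepages.sh keeps this state in globals):

#!/usr/bin/env bash
# Sketch only: size (kB) -> page count -> per-node table, mirroring the
# @49-@73 records. Assumes 2048 kB pages, as Hugepagesize reports below.
plan_test_hugepages() {
    local size_kb=$1; shift
    local -a user_nodes=("$@")            # e.g. (0)
    local default_hugepages_kb=2048
    (( size_kb >= default_hugepages_kb )) || return 1   # the @55 guard
    local nr_hugepages=$(( size_kb / default_hugepages_kb ))
    local -A nodes_test=()
    local node
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages   # every page lands on the named node
    done
    declare -p nodes_test
}
plan_test_hugepages 2097152 0   # -> declare -A nodes_test=([0]="1024" )

Nodes that were not requested simply get no entry in the table, which is why only node 0 carries the 1024 pages in the records below.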
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:40.239 15:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:43.524 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:43.524 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.524 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42627296 kB' 'MemAvailable: 47575956 kB' 'Buffers: 2704 kB' 'Cached: 11433100 kB' 'SwapCached: 0 kB' 'Active: 7337080 kB' 'Inactive: 4656152 kB' 'Active(anon): 6946032 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560104 kB' 'Mapped: 208860 kB' 'Shmem: 6388604 kB' 'KReclaimable: 549168 kB' 'Slab: 1187784 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638616 kB' 'KernelStack: 22416 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8429176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217608 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
[xtrace condensed: the @31-@32 field scan walks this snapshot from MemTotal through HardwareCorrupted, continuing on every non-matching field, until AnonHugePages matches]
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.526 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42629408 kB' 'MemAvailable: 47578068 kB' 'Buffers: 2704 kB' 'Cached: 11433104 kB' 'SwapCached: 0 kB' 'Active: 7337548 kB' 'Inactive: 4656152 kB' 'Active(anon): 6946500 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560680 kB' 'Mapped: 208824 kB' 'Shmem: 6388608 kB' 'KReclaimable: 549168 kB' 'Slab: 1187980 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638812 kB' 'KernelStack: 22560 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8427576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217560 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
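Every get_meminfo call in this log expands into one @31-@32 comparison record per meminfo line, which is why the raw xtrace is dominated by read/continue pairs. The pattern itself is compact; here is a re-creation for reference, a sketch under assumptions rather than the literal setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the traced get_meminfo pattern: snapshot the meminfo file once,
# strip any "Node N " prefix, then scan key/value pairs until the requested
# field matches and print its value.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo
    # with a node argument, prefer the per-node file (the @23 check)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry "Node N " (the @29 strip)
    local line var val _rest
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                # the @33 echo
            return 0
        fi
    done
    return 1
}
get_meminfo HugePages_Surp   # prints 0 on this box, matching surp=0 below

With no node argument (node= stays empty, as the @18 records show), it falls through to the system-wide /proc/meminfo and the prefix strip is a no-op.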
[xtrace condensed: the @31-@32 field scan walks the snapshot above from MemTotal through HugePages_Rsvd, continuing on every non-matching field, until HugePages_Surp matches]
00:04:43.527 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.527 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.527 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42626828 kB' 'MemAvailable: 47575488 kB' 'Buffers: 2704 kB' 'Cached: 11433120 kB' 'SwapCached: 0 kB' 'Active: 7336936 kB' 'Inactive: 4656152 kB' 'Active(anon): 6945888 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560472 kB' 'Mapped: 208748 kB' 'Shmem: 6388624 kB' 'KReclaimable: 549168 kB' 'Slab: 1188136 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638968 kB' 'KernelStack: 22608 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8444740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217576 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.528 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
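The xtrace entries above are setup/common.sh's get_meminfo scanning every /proc/meminfo key until it reaches the one requested (HugePages_Rsvd here); the backslash-escaped right-hand side is just how bash xtrace prints the literal pattern in the [[ $var == ... ]] comparison. A condensed sketch of the traced logic (not the verbatim setup/common.sh source) looks like this; the scan resumes below and ends with echo 0, which hugepages.sh stores as resv=0:

    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or
    # from the per-node meminfo file when a NUMA node number is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # With no node argument the path below is node/meminfo, which
        # does not exist, so the global /proc/meminfo is kept -- exactly
        # the [[ -e /sys/devices/system/node/node/meminfo ]] test traced above.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # IFS=': ' splits "MemTotal: 60295240 kB" into var=MemTotal,
            # val=60295240, with the "kB" unit discarded into _.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }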
00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.529 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.530 nr_hugepages=1024 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.530 resv_hugepages=0 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.530 surplus_hugepages=0 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.530 anon_hugepages=0 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42624504 kB' 'MemAvailable: 47573164 kB' 'Buffers: 2704 kB' 'Cached: 11433140 kB' 'SwapCached: 0 kB' 'Active: 7336920 kB' 'Inactive: 4656152 kB' 'Active(anon): 6945872 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560480 kB' 'Mapped: 208748 kB' 'Shmem: 6388644 kB' 'KReclaimable: 549168 kB' 'Slab: 1188144 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638976 kB' 'KernelStack: 22448 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8428872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217512 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.530 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
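This second scan is get_meminfo HugePages_Total; it finishes below with echo 1024, after which hugepages.sh re-checks the no_shrink_alloc invariant that the allocated pool did not shrink. With the values from the meminfo dump above, the bookkeeping reduces to the following sketch (reusing the get_meminfo sketch shown earlier):

    # no_shrink_alloc bookkeeping with this run's values:
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this trace
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this trace
    total=$(get_meminfo HugePages_Total)  # 1024 in this trace
    # The hugepages.sh@107/@109/@110 checks in the trace: both pass,
    # since 1024 == 1024 + 0 + 0.
    (( total == nr_hugepages + surp + resv )) || echo "pool shrank" >&2
    (( total == nr_hugepages )) || echo "unexpected total" >&2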
00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.531 15:09:47 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.531 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25078608 kB' 'MemUsed: 7560532 kB' 'SwapCached: 0 kB' 'Active: 3038092 kB' 'Inactive: 622632 kB' 'Active(anon): 2734700 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3288032 kB' 'Mapped: 130932 kB' 'AnonPages: 375328 kB' 'Shmem: 2362008 kB' 'KernelStack: 12472 kB' 'PageTables: 5232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 351808 kB' 'Slab: 667524 kB' 'SReclaimable: 351808 kB' 'SUnreclaim: 315716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
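get_nodes (hugepages.sh@27-@33 above) enumerates NUMA nodes with the extglob pattern node+([0-9]) and records each node's hugepage count (1024 on node0, 0 on node1, no_nodes=2); get_meminfo is then re-invoked with node=0, which is why mem_f switches to /sys/devices/system/node/node0/meminfo in the trace. A sketch of the enumeration, reusing the get_meminfo helper sketched earlier (the real script populates nodes_sys from its own state rather than through this helper):

    shopt -s extglob   # required for the +([0-9]) pattern to glob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path prefix, leaving the node number.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine
    (( no_nodes > 0 )) || exit 1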
00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.532 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" xtrace repeated for every remaining non-matching /proc/meminfo key, Unevictable through HugePages_Free ...]
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.533 15:09:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:46.822 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:46.822 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
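The backslash-heavy comparisons in the trace above (for example [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]) are not corruption; bash xtrace escapes each character of the quoted right-hand side, so every iteration is a plain literal match of one /proc/meminfo key against the requested name, and get_meminfo echoes that key's value (0 here) once it matches. A minimal sketch of that lookup pattern, simplified from what the trace shows (the helper name is hypothetical and this is not the verbatim setup/common.sh source):

    get_meminfo_sketch() {
        # get_meminfo_sketch <key> [<numa node>] -- hypothetical helper
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node lookups read the node-local counters when they exist.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node "$node" }          # node files prefix "Node <n> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then       # literal match, one key per call
                echo "${val:-0}"
                return 0
            fi
        done <"$mem_f"
        echo 0                                  # key absent: report zero
    }

Called as get_meminfo_sketch HugePages_Surp on this host it would print 0, which is the echo 0 / return 0 pair the trace records before the hugepages test moves on.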
00:04:46.822 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:46.822 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:46.822 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:46.822 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.822 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.822 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:46.822 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42625164 kB' 'MemAvailable: 47573824 kB' 'Buffers: 2704 kB' 'Cached: 11433240 kB' 'SwapCached: 0 kB' 'Active: 7336940 kB' 'Inactive: 4656152 kB' 'Active(anon): 6945892 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560380 kB' 'Mapped: 208784 kB' 'Shmem: 6388744 kB' 'KReclaimable: 549168 kB' 'Slab: 1188012 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638844 kB' 'KernelStack: 22368 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8426632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217432 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
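A quick consistency check on the snapshot just dumped, using only values from the log:

    # HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB (2 GiB),
    # exactly the 'Hugetlb: 2097152 kB' the snapshot reports; HugePages_Free
    # is also 1024, so every page in the pool is still available.
    echo $((1024 * 2048))   # -> 2097152

This is the situation the no_shrink_alloc case exercises: the run requested NRHUGE=512, but since 1024 pages were already allocated on node0 (the INFO line above), setup.sh must leave the larger pool in place rather than shrink it.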
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.823 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" xtrace repeated for every non-matching key, MemFree through HardwareCorrupted ...]
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42626008 kB' 'MemAvailable: 47574668 kB' 'Buffers: 2704 kB' 'Cached: 11433244 kB' 'SwapCached: 0 kB' 'Active: 7336552 kB' 'Inactive: 4656152 kB' 'Active(anon): 6945504 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560032 kB' 'Mapped: 208752 kB' 'Shmem: 6388748 kB' 'KReclaimable: 549168 kB' 'Slab: 1187804 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638636 kB' 'KernelStack: 22384 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8426648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217400 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB'
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.824 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" xtrace repeated for every non-matching key, MemFree through HugePages_Rsvd ...]
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
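At this point verify_nr_hugepages has established anon=0 and surp=0 and is fetching HugePages_Rsvd the same way. The log only shows setup/hugepages.sh line numbers, not its source, so the following is a hedged reconstruction of the per-node check that printed "node0=1024 expecting 1024" earlier; the function name is hypothetical and it reuses the get_meminfo_sketch helper from above, not SPDK's actual code:

    verify_nr_hugepages_sketch() {
        # Hypothetical reconstruction: compare each NUMA node's allocated
        # hugepage count against the expected count, as the trace reports it.
        local expected=$1 node total
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*/node}
            total=$(get_meminfo_sketch HugePages_Total "$node")
            echo "node$node=$total expecting $expected"
            [[ $total == "$expected" ]] || return 1
        done
    }

On this host, verify_nr_hugepages_sketch 1024 would print "node0=1024 expecting 1024" and succeed, matching the earlier round of the trace.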
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42625856 kB' 'MemAvailable: 47574516 kB' 'Buffers: 2704 kB' 'Cached: 11433272 kB' 'SwapCached: 0 kB' 'Active: 7336456 kB' 'Inactive: 4656152 kB' 'Active(anon): 6945408 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559924 kB' 'Mapped: 208752 kB' 'Shmem: 6388776 kB' 'KReclaimable: 549168 kB' 'Slab: 1187804 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638636 kB' 'KernelStack: 22368 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8426672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217400 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.826 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[identical xtrace for each remaining key, SwapCached through CmaTotal: setup/common.sh@31 reads the next 'key: value' pair, setup/common.sh@32 tests it against HugePages_Rsvd and hits continue every time]
00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.828 nr_hugepages=1024 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.828 resv_hugepages=0 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.828 surplus_hugepages=0 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.828 anon_hugepages=0 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
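For anyone reading the trace: the long runs of '[[ key == \H\u\g\e... ]]' / 'continue' above are setup/common.sh's get_meminfo scanning a meminfo dump key by key until it reaches the requested field (HugePages_Rsvd here, answering 0). The odd-looking /sys/devices/system/node/node/meminfo test is the same per-node check with an empty node argument. A minimal bash sketch of that pattern, simplified from what the xtrace shows rather than copied from the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {                        # usage: get_meminfo <key> [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # per-node queries read that node's own meminfo, as the trace does for node0
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the per-key scan seen in the xtrace
            echo "$val"; return 0
        done
        return 1
    }
    get_meminfo HugePages_Rsvd     # -> 0 on this box
    get_meminfo HugePages_Total 0  # -> 1024, read from node0's meminfo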
00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42626120 kB' 'MemAvailable: 47574780 kB' 'Buffers: 2704 kB' 'Cached: 11433284 kB' 'SwapCached: 0 kB' 'Active: 7336480 kB' 'Inactive: 4656152 kB' 'Active(anon): 6945432 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559916 kB' 'Mapped: 208752 kB' 'Shmem: 6388788 kB' 'KReclaimable: 549168 kB' 'Slab: 1187804 kB' 'SReclaimable: 549168 kB' 'SUnreclaim: 638636 kB' 'KernelStack: 22368 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8426692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217416 kB' 'VmallocChunk: 0 kB' 'Percpu: 132608 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169652 kB' 'DirectMap2M: 23779328 kB' 'DirectMap1G: 41943040 kB' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.828 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[identical xtrace for each remaining key, SwapCached through CmaTotal: setup/common.sh@31 reads the next 'key: value' pair, setup/common.sh@32 tests it against HugePages_Total and hits continue every time]
00:04:46.829 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.829 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:46.829 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.829 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.829 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.829 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25092020 kB' 'MemUsed: 7547120 kB' 'SwapCached: 0 kB' 'Active: 3039764 kB' 'Inactive: 622632 kB' 'Active(anon): 2736372 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3288168 kB' 'Mapped: 130936 kB' 'AnonPages: 377356 kB' 'Shmem: 2362144 kB' 'KernelStack: 12488 kB' 'PageTables: 5292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 351808 kB' 'Slab: 667120 kB' 'SReclaimable: 351808 kB' 'SUnreclaim: 315312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.830 15:09:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[identical xtrace for each remaining node0 key, Inactive(anon) through ShmemPmdMapped: setup/common.sh@31 reads the next 'key: value' pair, setup/common.sh@32 tests it against HugePages_Surp and hits continue every time]
00:04:46.830 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.831 15:09:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.831 node0=1024 expecting 1024 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.831 00:04:46.831 real 0m6.655s 00:04:46.831 user 0m2.445s 00:04:46.831 sys 0m4.268s 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.831 15:09:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.831 ************************************ 00:04:46.831 END TEST no_shrink_alloc 
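The 'node0=1024 expecting 1024' check that closes this test compares the get_meminfo tallies against a per-node view of the hugepage pools. A rough standalone equivalent of that per-node bookkeeping, read directly from sysfs (paths as on this two-node machine; the hugepages-2048kB directory name assumes the 2 MiB Hugepagesize reported above):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node page counts: 1024 on node0, 0 on node1 in this run
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    for n in "${!nodes_sys[@]}"; do
        echo "node$n=${nodes_sys[$n]}"   # the test asserts node0=1024
    done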
00:04:46.831 ************************************ 00:04:46.831 15:09:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:46.831 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:46.831 00:04:46.831 real 0m26.200s 00:04:46.831 user 0m9.229s 00:04:46.831 sys 0m15.683s 00:04:46.831 15:09:50 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.831 15:09:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.831 ************************************ 00:04:46.831 END TEST hugepages 00:04:46.831 ************************************ 00:04:47.089 15:09:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:47.089 15:09:50 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:47.089 15:09:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.090 15:09:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.090 15:09:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.090 ************************************ 00:04:47.090 START TEST driver 00:04:47.090 ************************************ 00:04:47.090 15:09:50 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:47.090 * Looking for test storage... 
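The long run of "continue" entries above is the xtrace of a field-by-field scan of /proc/meminfo, and the "echo 0" writes that follow come from resetting each NUMA node's hugepage reservations before the next test. A minimal sketch of both loops, using hypothetical helper names (the shipped code lives in test/setup/common.sh and test/setup/hugepages.sh):

#!/usr/bin/env bash
# Minimal sketch, not the shipped SPDK helpers. get_meminfo_sketch scans a
# meminfo file for one key; every non-matching key is one of the "continue"
# entries in the trace above, and the match ends in "echo <val> / return 0".
get_meminfo_sketch() {    # hypothetical name
    local target=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$target" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# clear_hp_sketch mirrors the "echo 0" writes above: zero every node's
# hugepage reservation (requires root, as in the traced run).
clear_hp_sketch() {       # hypothetical name
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
}

get_meminfo_sketch HugePages_Surp   # printed 0 in the run above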
00:04:47.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.090 15:09:50 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:47.090 15:09:50 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.090 15:09:50 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.396 15:09:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:52.396 15:09:55 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.396 15:09:55 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.396 15:09:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:52.396 ************************************ 00:04:52.396 START TEST guess_driver 00:04:52.396 ************************************ 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:52.396 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:52.396 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:52.396 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:52.396 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:52.396 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:52.396 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:52.396 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:52.396 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:52.396 15:09:55 setup.sh.driver.guess_driver
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:52.397 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:52.397 Looking for driver=vfio-pci 00:04:52.397 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:52.397 15:09:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:52.397 15:09:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.397 15:09:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:59 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.676 15:09:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.049 15:10:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.049 15:10:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.049 15:10:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.049 15:10:00 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:57.049 15:10:00 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:57.049 15:10:00 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:57.049 15:10:00 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.367 00:05:02.367 real 0m9.905s 00:05:02.367 user 0m2.644s 00:05:02.367 sys 0m5.111s 00:05:02.367 15:10:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.367 15:10:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:02.367 ************************************ 00:05:02.367 END TEST guess_driver 00:05:02.367 ************************************ 00:05:02.367 15:10:05 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:02.367 00:05:02.367 real 0m14.715s 00:05:02.367 user 0m3.940s 00:05:02.367 sys 0m7.835s 00:05:02.367 15:10:05 
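The guess_driver trace above reduces to one decision: with IOMMU groups present (176 on this host) and vfio_pci's dependency chain resolving to kernel modules, the test picks vfio-pci; otherwise it reports "No valid driver found". An illustrative reconstruction under those assumptions (the function name is mine, the grep is an approximation of the *.ko pattern match, and the unsafe-noiommu fallback probed via /sys/module/vfio/parameters is omitted; the real logic is test/setup/driver.sh):

#!/usr/bin/env bash
# Illustrative sketch, not the shipped pick_driver/vfio functions.
shopt -s nullglob    # make an empty iommu_groups directory count as zero

guess_driver_sketch() {    # hypothetical name
    local groups=(/sys/kernel/iommu_groups/*)
    if ((${#groups[@]} > 0)) &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}

guess_driver_sketch    # prints "vfio-pci" on the machine traced above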
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.367 15:10:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:02.367 ************************************ 00:05:02.367 END TEST driver 00:05:02.367 ************************************ 00:05:02.367 15:10:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:02.367 15:10:05 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:02.367 15:10:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.367 15:10:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.367 15:10:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.367 ************************************ 00:05:02.367 START TEST devices 00:05:02.367 ************************************ 00:05:02.367 15:10:05 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:02.367 * Looking for test storage... 00:05:02.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:02.367 15:10:05 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:02.367 15:10:05 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:02.367 15:10:05 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.367 15:10:05 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:05.663 15:10:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:05.663 15:10:09 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:05.663 15:10:09 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:05.663 
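The devices test opens by filtering the host's block devices: zoned namespaces are excluded, anything smaller than min_disk_size (3221225472 bytes, i.e. 3 GiB) is skipped, and each surviving disk is mapped to its PCI address; the entries that follow then run block_in_use (spdk-gpt.py plus blkid) to confirm nvme0n1 carries no partition table. A sketch of that bookkeeping (illustrative: the device/device sysfs hop for the PCI address is an assumption, and the shipped loop additionally excludes nvme*c* multipath nodes):

#!/usr/bin/env bash
# Illustrative sketch of the selection logic in test/setup/devices.sh.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in the trace

declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme*; do
    dev=${block##*/}
    # is_block_zoned: anything other than "none" disqualifies the namespace
    if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
        continue
    fi
    # sec_size_to_bytes: /sys/block/<dev>/size counts 512-byte sectors
    size=$(($(<"$block/size") * 512))
    ((size >= min_disk_size)) || continue
    pci=$(basename "$(readlink -f "$block/device/device")")    # assumed layout
    blocks+=("$dev")
    blocks_to_pci[$dev]=$pci
done

for dev in "${blocks[@]}"; do
    echo "$dev -> ${blocks_to_pci[$dev]}"    # e.g. nvme0n1 -> 0000:d8:00.0
done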
15:10:09 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:05.663 No valid GPT data, bailing 00:05:05.663 15:10:09 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:05.663 15:10:09 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:05.664 15:10:09 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:05.664 15:10:09 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:05.664 15:10:09 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:05.664 15:10:09 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:05.664 15:10:09 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:05:05.664 15:10:09 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:05:05.664 15:10:09 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:05.664 15:10:09 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:05.664 15:10:09 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:05.664 15:10:09 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:05.664 15:10:09 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:05.664 15:10:09 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.664 15:10:09 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.664 15:10:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:05.664 ************************************ 00:05:05.664 START TEST nvme_mount 00:05:05.664 ************************************ 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:05.664 15:10:09 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:06.600 Creating new GPT entries in memory. 00:05:06.600 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:06.600 other utilities. 00:05:06.600 15:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:06.600 15:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:06.600 15:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:06.600 15:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:06.600 15:10:10 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:07.974 Creating new GPT entries in memory. 00:05:07.974 The operation has completed successfully. 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2849935 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.974 15:10:11 
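Summarized, the nvme_mount setup just traced zaps nvme0n1's partition tables, creates a single 1 GiB partition (1073741824 / 512 = 2097152 sectors, occupying 2048 through 2099199), formats it ext4, and mounts it; the run also serializes on udev partition events via scripts/sync_dev_uevents.sh, which this condensed sketch omits. Destructive if run as-is, and the mount point is shortened from the workspace path used above:

#!/usr/bin/env bash
# Condensed, illustrative version of the partition_drive/mkfs sequence in
# test/setup/common.sh. Wipes $disk -- do not point it at a disk you need.
set -euo pipefail
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount    # stand-in for .../spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                             # destroy GPT and MBR
flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition
mkfs.ext4 -qF "${disk}p1"                            # quiet, force
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"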
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.974 15:10:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 
15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:11.256 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:11.256 15:10:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:11.256 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:11.256 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:11.256 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:11.256 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:11.256 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:11.256 15:10:15 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:11.256 15:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.256 15:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:11.256 15:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.514 15:10:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.818 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.819 15:10:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.819 15:10:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.350 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.350 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:17.350 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:17.350 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.609 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.609 00:05:17.609 real 0m11.892s 00:05:17.609 user 0m3.300s 00:05:17.609 sys 0m6.419s 00:05:17.609 15:10:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.609 15:10:21 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:17.609 ************************************ 00:05:17.609 END TEST nvme_mount 00:05:17.609 ************************************ 00:05:17.609 15:10:21 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:17.609 15:10:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.609 15:10:21 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.609 15:10:21 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.609 15:10:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.609 ************************************ 00:05:17.609 START TEST dm_mount 00:05:17.609 ************************************ 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:17.609 15:10:21 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.544 Creating new GPT entries in memory. 00:05:18.544 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.544 other utilities. 00:05:18.544 15:10:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.544 15:10:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.544 15:10:22 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:18.544 15:10:22 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.544 15:10:22 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:19.919 Creating new GPT entries in memory. 00:05:19.919 The operation has completed successfully. 00:05:19.919 15:10:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.919 15:10:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.919 15:10:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.919 15:10:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.919 15:10:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:20.860 The operation has completed successfully. 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2854316 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.860 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.861 15:10:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.389 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:23.647 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:23.648 15:10:27 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.648 15:10:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:26.933 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:26.933 00:05:26.933 real 0m9.075s 00:05:26.933 user 0m2.004s 00:05:26.933 sys 0m4.044s 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.933 15:10:30 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:26.933 ************************************ 00:05:26.933 END TEST dm_mount 00:05:26.933 ************************************ 00:05:26.933 15:10:30 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:26.933 
15:10:30 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.933 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:26.933 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:26.933 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:26.933 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.933 15:10:30 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:26.933 00:05:26.933 real 0m25.201s 00:05:26.933 user 0m6.705s 00:05:26.933 sys 0m13.185s 00:05:26.933 15:10:30 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.933 15:10:30 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:26.933 ************************************ 00:05:26.933 END TEST devices 00:05:26.933 ************************************ 00:05:26.933 15:10:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:26.933 00:05:26.933 real 1m30.764s 00:05:26.933 user 0m27.924s 00:05:26.933 sys 0m51.663s 00:05:26.933 15:10:30 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.933 15:10:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:26.933 ************************************ 00:05:26.933 END TEST setup.sh 00:05:26.933 ************************************ 00:05:27.205 15:10:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.205 15:10:30 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:30.499 Hugepages 00:05:30.499 node hugesize free / total 00:05:30.499 node0 1048576kB 0 / 0 00:05:30.499 node0 2048kB 2048 / 2048 00:05:30.499 node1 1048576kB 0 / 0 00:05:30.499 node1 2048kB 0 / 0 00:05:30.499 00:05:30.499 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.499 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:30.499 I/OAT 0000:80:04.0 8086 2021 1 
ioatdma - - 00:05:30.499 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:30.499 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:30.499 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:30.499 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:30.499 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:30.499 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:30.499 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:30.499 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:30.499 15:10:34 -- spdk/autotest.sh@130 -- # uname -s 00:05:30.499 15:10:34 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:30.499 15:10:34 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:30.499 15:10:34 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:33.788 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:33.788 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:35.691 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:35.691 15:10:39 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:36.627 15:10:40 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:36.627 15:10:40 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:36.627 15:10:40 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.627 15:10:40 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:36.627 15:10:40 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:36.627 15:10:40 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:36.627 15:10:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.627 15:10:40 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:36.627 15:10:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:36.627 15:10:40 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:36.627 15:10:40 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:36.627 15:10:40 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:39.915 Waiting for block devices as requested 00:05:39.915 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:39.915 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:39.915 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:40.197 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:40.197 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:40.197 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:40.197 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:40.455 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:40.455 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 
00:05:40.455 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:40.713 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:40.713 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:40.713 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:40.971 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:40.971 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:40.971 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:41.229 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:41.229 15:10:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:41.229 15:10:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:41.229 15:10:45 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:41.230 15:10:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:41.230 15:10:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:41.230 15:10:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:41.230 15:10:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:41.230 15:10:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:41.230 15:10:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:41.230 15:10:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:41.230 15:10:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:41.230 15:10:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:41.230 15:10:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:41.230 15:10:45 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:41.230 15:10:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:41.230 15:10:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:41.230 15:10:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:41.230 15:10:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:41.230 15:10:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:41.230 15:10:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:41.230 15:10:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:41.230 15:10:45 -- common/autotest_common.sh@1557 -- # continue 00:05:41.230 15:10:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:41.230 15:10:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.230 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:41.486 15:10:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:41.486 15:10:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:41.486 15:10:45 -- common/autotest_common.sh@10 -- # set +x 00:05:41.486 15:10:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:44.875 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.5 
(8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:44.875 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:46.246 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:46.246 15:10:50 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:46.246 15:10:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.246 15:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 15:10:50 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:46.502 15:10:50 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:46.502 15:10:50 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:46.502 15:10:50 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:46.502 15:10:50 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:46.502 15:10:50 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:46.502 15:10:50 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:46.502 15:10:50 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:46.502 15:10:50 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.502 15:10:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:46.502 15:10:50 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:46.502 15:10:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:46.502 15:10:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:46.502 15:10:50 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:46.502 15:10:50 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:46.502 15:10:50 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:46.502 15:10:50 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:46.502 15:10:50 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:46.502 15:10:50 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:46.502 15:10:50 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:46.502 15:10:50 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2863563 00:05:46.502 15:10:50 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.502 15:10:50 -- common/autotest_common.sh@1598 -- # waitforlisten 2863563 00:05:46.502 15:10:50 -- common/autotest_common.sh@829 -- # '[' -z 2863563 ']' 00:05:46.502 15:10:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.502 15:10:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.502 15:10:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.502 15:10:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.502 15:10:50 -- common/autotest_common.sh@10 -- # set +x 00:05:46.502 [2024-07-15 15:10:50.346902] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
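For reference, the get_nvme_bdfs_by_id helper traced above builds its BDF list from the gen_nvme.sh | jq pipeline and then filters on the PCI device ID read from sysfs. A minimal standalone sketch of that flow, assuming it runs from an SPDK checkout with rootdir set (variable names mirror the trace):

# Enumerate NVMe controller BDFs the way the trace above does.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
# Keep only controllers whose PCI device ID matches (0x0a54 here).
matched=()
for bdf in "${bdfs[@]}"; do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && matched+=("$bdf")
done
printf '%s\n' "${matched[@]}"    # prints 0000:d8:00.0 on this test node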
00:05:46.502 [2024-07-15 15:10:50.346952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2863563 ]
00:05:46.502 EAL: No free 2048 kB hugepages reported on node 1
00:05:46.759 [2024-07-15 15:10:50.416942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.759 [2024-07-15 15:10:50.491914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.322 15:10:51 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:47.322 15:10:51 -- common/autotest_common.sh@862 -- # return 0
00:05:47.322 15:10:51 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:05:47.322 15:10:51 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:05:47.322 15:10:51 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:05:50.600 nvme0n1
00:05:50.600 15:10:54 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:50.600 [2024-07-15 15:10:54.286621] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:05:50.600 request:
00:05:50.600 {
00:05:50.600 "nvme_ctrlr_name": "nvme0",
00:05:50.600 "password": "test",
00:05:50.600 "method": "bdev_nvme_opal_revert",
00:05:50.600 "req_id": 1
00:05:50.600 }
00:05:50.600 Got JSON-RPC error response
00:05:50.600 response:
00:05:50.600 {
00:05:50.600 "code": -32602,
00:05:50.600 "message": "Invalid parameters"
00:05:50.600 }
00:05:50.600 15:10:54 -- common/autotest_common.sh@1604 -- # true
00:05:50.600 15:10:54 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:05:50.600 15:10:54 -- common/autotest_common.sh@1608 -- # killprocess 2863563
00:05:50.600 15:10:54 -- common/autotest_common.sh@948 -- # '[' -z 2863563 ']'
00:05:50.600 15:10:54 -- common/autotest_common.sh@952 -- # kill -0 2863563
00:05:50.600 15:10:54 -- common/autotest_common.sh@953 -- # uname
00:05:50.600 15:10:54 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:50.600 15:10:54 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2863563
00:05:50.600 15:10:54 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:50.600 15:10:54 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:50.600 15:10:54 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2863563'
00:05:50.600 killing process with pid 2863563
00:05:50.600 15:10:54 -- common/autotest_common.sh@967 -- # kill 2863563
00:05:50.600 15:10:54 -- common/autotest_common.sh@972 -- # wait 2863563
00:05:53.129 15:10:56 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:05:53.129 15:10:56 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:05:53.129 15:10:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:53.129 15:10:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:05:53.129 15:10:56 -- spdk/autotest.sh@162 -- # timing_enter lib
00:05:53.129 15:10:56 -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:53.129 15:10:56 -- common/autotest_common.sh@10 -- # set +x
00:05:53.129 15:10:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:05:53.129 15:10:56 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:53.129 15:10:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:53.129 15:10:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:53.129 15:10:56 -- common/autotest_common.sh@10 -- # set +x
00:05:53.129 ************************************
00:05:53.129 START TEST env
00:05:53.129 ************************************
00:05:53.129 15:10:56 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:05:53.129 * Looking for test storage...
00:05:53.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:05:53.129 15:10:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:53.129 15:10:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:53.129 15:10:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:53.129 15:10:56 env -- common/autotest_common.sh@10 -- # set +x
00:05:53.129 ************************************
00:05:53.129 START TEST env_memory
00:05:53.129 ************************************
00:05:53.129 15:10:56 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:05:53.129
00:05:53.129
00:05:53.129 CUnit - A unit testing framework for C - Version 2.1-3
00:05:53.129 http://cunit.sourceforge.net/
00:05:53.129
00:05:53.129
00:05:53.129 Suite: memory
00:05:53.129 Test: alloc and free memory map ...[2024-07-15 15:10:56.790763] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:53.129 passed
00:05:53.129 Test: mem map translation ...[2024-07-15 15:10:56.809253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:53.129 [2024-07-15 15:10:56.809270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:53.129 [2024-07-15 15:10:56.809306] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:53.129 [2024-07-15 15:10:56.809315] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:53.129 passed
00:05:53.129 Test: mem map registration ...[2024-07-15 15:10:56.844840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:05:53.129 [2024-07-15 15:10:56.844861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:05:53.129 passed
00:05:53.129 Test: mem map adjacent registrations ...passed
00:05:53.129
00:05:53.129 Run Summary: Type Total Ran Passed Failed Inactive
00:05:53.129 suites 1 1 n/a 0 0
00:05:53.129 tests 4 4 4 0 0
00:05:53.129 asserts 152 152 152 0 n/a
00:05:53.129
00:05:53.129 Elapsed time = 0.132 seconds
00:05:53.129
00:05:53.129 real 0m0.146s
00:05:53.129 user 0m0.135s
00:05:53.129 sys 0m0.011s
00:05:53.129 15:10:56 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:53.129 15:10:56 env.env_memory -- common/autotest_common.sh@10 -- #
set +x 00:05:53.129 ************************************ 00:05:53.129 END TEST env_memory 00:05:53.129 ************************************ 00:05:53.129 15:10:56 env -- common/autotest_common.sh@1142 -- # return 0 00:05:53.129 15:10:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:53.129 15:10:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.129 15:10:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.129 15:10:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.129 ************************************ 00:05:53.129 START TEST env_vtophys 00:05:53.129 ************************************ 00:05:53.129 15:10:56 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:53.129 EAL: lib.eal log level changed from notice to debug 00:05:53.129 EAL: Detected lcore 0 as core 0 on socket 0 00:05:53.129 EAL: Detected lcore 1 as core 1 on socket 0 00:05:53.129 EAL: Detected lcore 2 as core 2 on socket 0 00:05:53.129 EAL: Detected lcore 3 as core 3 on socket 0 00:05:53.129 EAL: Detected lcore 4 as core 4 on socket 0 00:05:53.129 EAL: Detected lcore 5 as core 5 on socket 0 00:05:53.129 EAL: Detected lcore 6 as core 6 on socket 0 00:05:53.129 EAL: Detected lcore 7 as core 8 on socket 0 00:05:53.129 EAL: Detected lcore 8 as core 9 on socket 0 00:05:53.129 EAL: Detected lcore 9 as core 10 on socket 0 00:05:53.129 EAL: Detected lcore 10 as core 11 on socket 0 00:05:53.129 EAL: Detected lcore 11 as core 12 on socket 0 00:05:53.129 EAL: Detected lcore 12 as core 13 on socket 0 00:05:53.129 EAL: Detected lcore 13 as core 14 on socket 0 00:05:53.129 EAL: Detected lcore 14 as core 16 on socket 0 00:05:53.129 EAL: Detected lcore 15 as core 17 on socket 0 00:05:53.129 EAL: Detected lcore 16 as core 18 on socket 0 00:05:53.129 EAL: Detected lcore 17 as core 19 on socket 0 00:05:53.129 EAL: Detected lcore 18 as core 20 on socket 0 00:05:53.129 EAL: Detected lcore 19 as core 21 on socket 0 00:05:53.129 EAL: Detected lcore 20 as core 22 on socket 0 00:05:53.129 EAL: Detected lcore 21 as core 24 on socket 0 00:05:53.129 EAL: Detected lcore 22 as core 25 on socket 0 00:05:53.129 EAL: Detected lcore 23 as core 26 on socket 0 00:05:53.129 EAL: Detected lcore 24 as core 27 on socket 0 00:05:53.129 EAL: Detected lcore 25 as core 28 on socket 0 00:05:53.129 EAL: Detected lcore 26 as core 29 on socket 0 00:05:53.129 EAL: Detected lcore 27 as core 30 on socket 0 00:05:53.129 EAL: Detected lcore 28 as core 0 on socket 1 00:05:53.129 EAL: Detected lcore 29 as core 1 on socket 1 00:05:53.129 EAL: Detected lcore 30 as core 2 on socket 1 00:05:53.129 EAL: Detected lcore 31 as core 3 on socket 1 00:05:53.129 EAL: Detected lcore 32 as core 4 on socket 1 00:05:53.129 EAL: Detected lcore 33 as core 5 on socket 1 00:05:53.129 EAL: Detected lcore 34 as core 6 on socket 1 00:05:53.129 EAL: Detected lcore 35 as core 8 on socket 1 00:05:53.129 EAL: Detected lcore 36 as core 9 on socket 1 00:05:53.129 EAL: Detected lcore 37 as core 10 on socket 1 00:05:53.129 EAL: Detected lcore 38 as core 11 on socket 1 00:05:53.129 EAL: Detected lcore 39 as core 12 on socket 1 00:05:53.129 EAL: Detected lcore 40 as core 13 on socket 1 00:05:53.129 EAL: Detected lcore 41 as core 14 on socket 1 00:05:53.129 EAL: Detected lcore 42 as core 16 on socket 1 00:05:53.129 EAL: Detected lcore 43 as core 17 on socket 1 00:05:53.129 EAL: Detected lcore 44 as core 
18 on socket 1 00:05:53.129 EAL: Detected lcore 45 as core 19 on socket 1 00:05:53.129 EAL: Detected lcore 46 as core 20 on socket 1 00:05:53.129 EAL: Detected lcore 47 as core 21 on socket 1 00:05:53.129 EAL: Detected lcore 48 as core 22 on socket 1 00:05:53.129 EAL: Detected lcore 49 as core 24 on socket 1 00:05:53.129 EAL: Detected lcore 50 as core 25 on socket 1 00:05:53.130 EAL: Detected lcore 51 as core 26 on socket 1 00:05:53.130 EAL: Detected lcore 52 as core 27 on socket 1 00:05:53.130 EAL: Detected lcore 53 as core 28 on socket 1 00:05:53.130 EAL: Detected lcore 54 as core 29 on socket 1 00:05:53.130 EAL: Detected lcore 55 as core 30 on socket 1 00:05:53.130 EAL: Detected lcore 56 as core 0 on socket 0 00:05:53.130 EAL: Detected lcore 57 as core 1 on socket 0 00:05:53.130 EAL: Detected lcore 58 as core 2 on socket 0 00:05:53.130 EAL: Detected lcore 59 as core 3 on socket 0 00:05:53.130 EAL: Detected lcore 60 as core 4 on socket 0 00:05:53.130 EAL: Detected lcore 61 as core 5 on socket 0 00:05:53.130 EAL: Detected lcore 62 as core 6 on socket 0 00:05:53.130 EAL: Detected lcore 63 as core 8 on socket 0 00:05:53.130 EAL: Detected lcore 64 as core 9 on socket 0 00:05:53.130 EAL: Detected lcore 65 as core 10 on socket 0 00:05:53.130 EAL: Detected lcore 66 as core 11 on socket 0 00:05:53.130 EAL: Detected lcore 67 as core 12 on socket 0 00:05:53.130 EAL: Detected lcore 68 as core 13 on socket 0 00:05:53.130 EAL: Detected lcore 69 as core 14 on socket 0 00:05:53.130 EAL: Detected lcore 70 as core 16 on socket 0 00:05:53.130 EAL: Detected lcore 71 as core 17 on socket 0 00:05:53.130 EAL: Detected lcore 72 as core 18 on socket 0 00:05:53.130 EAL: Detected lcore 73 as core 19 on socket 0 00:05:53.130 EAL: Detected lcore 74 as core 20 on socket 0 00:05:53.130 EAL: Detected lcore 75 as core 21 on socket 0 00:05:53.130 EAL: Detected lcore 76 as core 22 on socket 0 00:05:53.130 EAL: Detected lcore 77 as core 24 on socket 0 00:05:53.130 EAL: Detected lcore 78 as core 25 on socket 0 00:05:53.130 EAL: Detected lcore 79 as core 26 on socket 0 00:05:53.130 EAL: Detected lcore 80 as core 27 on socket 0 00:05:53.130 EAL: Detected lcore 81 as core 28 on socket 0 00:05:53.130 EAL: Detected lcore 82 as core 29 on socket 0 00:05:53.130 EAL: Detected lcore 83 as core 30 on socket 0 00:05:53.130 EAL: Detected lcore 84 as core 0 on socket 1 00:05:53.130 EAL: Detected lcore 85 as core 1 on socket 1 00:05:53.130 EAL: Detected lcore 86 as core 2 on socket 1 00:05:53.130 EAL: Detected lcore 87 as core 3 on socket 1 00:05:53.130 EAL: Detected lcore 88 as core 4 on socket 1 00:05:53.130 EAL: Detected lcore 89 as core 5 on socket 1 00:05:53.130 EAL: Detected lcore 90 as core 6 on socket 1 00:05:53.130 EAL: Detected lcore 91 as core 8 on socket 1 00:05:53.130 EAL: Detected lcore 92 as core 9 on socket 1 00:05:53.130 EAL: Detected lcore 93 as core 10 on socket 1 00:05:53.130 EAL: Detected lcore 94 as core 11 on socket 1 00:05:53.130 EAL: Detected lcore 95 as core 12 on socket 1 00:05:53.130 EAL: Detected lcore 96 as core 13 on socket 1 00:05:53.130 EAL: Detected lcore 97 as core 14 on socket 1 00:05:53.130 EAL: Detected lcore 98 as core 16 on socket 1 00:05:53.130 EAL: Detected lcore 99 as core 17 on socket 1 00:05:53.130 EAL: Detected lcore 100 as core 18 on socket 1 00:05:53.130 EAL: Detected lcore 101 as core 19 on socket 1 00:05:53.130 EAL: Detected lcore 102 as core 20 on socket 1 00:05:53.130 EAL: Detected lcore 103 as core 21 on socket 1 00:05:53.130 EAL: Detected lcore 104 as core 22 on socket 1 00:05:53.130 
EAL: Detected lcore 105 as core 24 on socket 1 00:05:53.130 EAL: Detected lcore 106 as core 25 on socket 1 00:05:53.130 EAL: Detected lcore 107 as core 26 on socket 1 00:05:53.130 EAL: Detected lcore 108 as core 27 on socket 1 00:05:53.130 EAL: Detected lcore 109 as core 28 on socket 1 00:05:53.130 EAL: Detected lcore 110 as core 29 on socket 1 00:05:53.130 EAL: Detected lcore 111 as core 30 on socket 1 00:05:53.130 EAL: Maximum logical cores by configuration: 128 00:05:53.130 EAL: Detected CPU lcores: 112 00:05:53.130 EAL: Detected NUMA nodes: 2 00:05:53.130 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:53.130 EAL: Detected shared linkage of DPDK 00:05:53.130 EAL: No shared files mode enabled, IPC will be disabled 00:05:53.130 EAL: Bus pci wants IOVA as 'DC' 00:05:53.130 EAL: Buses did not request a specific IOVA mode. 00:05:53.130 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:53.130 EAL: Selected IOVA mode 'VA' 00:05:53.130 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.130 EAL: Probing VFIO support... 00:05:53.130 EAL: IOMMU type 1 (Type 1) is supported 00:05:53.130 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:53.130 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:53.130 EAL: VFIO support initialized 00:05:53.130 EAL: Ask a virtual area of 0x2e000 bytes 00:05:53.130 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:53.130 EAL: Setting up physically contiguous memory... 00:05:53.130 EAL: Setting maximum number of open files to 524288 00:05:53.130 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:53.130 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:53.130 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:53.130 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.130 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:53.130 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.130 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.130 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:53.130 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:53.130 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.130 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:53.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.389 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:53.389 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:53.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.389 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:53.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.389 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:53.389 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:53.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.389 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:53.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.389 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:53.389 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:53.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:53.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.389 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:05:53.389 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.389 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:53.389 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:53.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.389 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:53.389 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.389 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:53.389 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:53.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.389 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:53.389 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.389 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:53.389 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:53.389 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.389 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:53.389 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:53.389 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.389 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:53.389 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:53.389 EAL: Hugepages will be freed exactly as allocated. 00:05:53.389 EAL: No shared files mode enabled, IPC is disabled 00:05:53.389 EAL: No shared files mode enabled, IPC is disabled 00:05:53.389 EAL: TSC frequency is ~2500000 KHz 00:05:53.389 EAL: Main lcore 0 is ready (tid=7fba4bfe7a00;cpuset=[0]) 00:05:53.389 EAL: Trying to obtain current memory policy. 00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.389 EAL: Restoring previous memory policy: 0 00:05:53.389 EAL: request: mp_malloc_sync 00:05:53.389 EAL: No shared files mode enabled, IPC is disabled 00:05:53.389 EAL: Heap on socket 0 was expanded by 2MB 00:05:53.389 EAL: No shared files mode enabled, IPC is disabled 00:05:53.389 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:53.389 EAL: Mem event callback 'spdk:(nil)' registered 00:05:53.389 00:05:53.389 00:05:53.389 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.389 http://cunit.sourceforge.net/ 00:05:53.389 00:05:53.389 00:05:53.389 Suite: components_suite 00:05:53.389 Test: vtophys_malloc_test ...passed 00:05:53.389 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.389 EAL: Restoring previous memory policy: 4 00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.389 EAL: request: mp_malloc_sync 00:05:53.389 EAL: No shared files mode enabled, IPC is disabled 00:05:53.389 EAL: Heap on socket 0 was expanded by 4MB 00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.389 EAL: request: mp_malloc_sync 00:05:53.389 EAL: No shared files mode enabled, IPC is disabled 00:05:53.389 EAL: Heap on socket 0 was shrunk by 4MB 00:05:53.389 EAL: Trying to obtain current memory policy. 
00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.389 EAL: Restoring previous memory policy: 4
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was expanded by 6MB
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was shrunk by 6MB
00:05:53.389 EAL: Trying to obtain current memory policy.
00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.389 EAL: Restoring previous memory policy: 4
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was expanded by 10MB
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was shrunk by 10MB
00:05:53.389 EAL: Trying to obtain current memory policy.
00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.389 EAL: Restoring previous memory policy: 4
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was expanded by 18MB
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was shrunk by 18MB
00:05:53.389 EAL: Trying to obtain current memory policy.
00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.389 EAL: Restoring previous memory policy: 4
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was expanded by 34MB
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was shrunk by 34MB
00:05:53.389 EAL: Trying to obtain current memory policy.
00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.389 EAL: Restoring previous memory policy: 4
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was expanded by 66MB
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was shrunk by 66MB
00:05:53.389 EAL: Trying to obtain current memory policy.
00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.389 EAL: Restoring previous memory policy: 4
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was expanded by 130MB
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was shrunk by 130MB
00:05:53.389 EAL: Trying to obtain current memory policy.
00:05:53.389 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.389 EAL: Restoring previous memory policy: 4
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.389 EAL: request: mp_malloc_sync
00:05:53.389 EAL: No shared files mode enabled, IPC is disabled
00:05:53.389 EAL: Heap on socket 0 was expanded by 258MB
00:05:53.389 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.647 EAL: request: mp_malloc_sync
00:05:53.647 EAL: No shared files mode enabled, IPC is disabled
00:05:53.647 EAL: Heap on socket 0 was shrunk by 258MB
00:05:53.647 EAL: Trying to obtain current memory policy.
00:05:53.647 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.647 EAL: Restoring previous memory policy: 4
00:05:53.647 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.647 EAL: request: mp_malloc_sync
00:05:53.647 EAL: No shared files mode enabled, IPC is disabled
00:05:53.647 EAL: Heap on socket 0 was expanded by 514MB
00:05:53.647 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.905 EAL: request: mp_malloc_sync
00:05:53.905 EAL: No shared files mode enabled, IPC is disabled
00:05:53.905 EAL: Heap on socket 0 was shrunk by 514MB
00:05:53.905 EAL: Trying to obtain current memory policy.
00:05:53.905 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:53.905 EAL: Restoring previous memory policy: 4
00:05:53.905 EAL: Calling mem event callback 'spdk:(nil)'
00:05:53.905 EAL: request: mp_malloc_sync
00:05:53.905 EAL: No shared files mode enabled, IPC is disabled
00:05:53.905 EAL: Heap on socket 0 was expanded by 1026MB
00:05:54.163 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.163 EAL: request: mp_malloc_sync
00:05:54.163 EAL: No shared files mode enabled, IPC is disabled
00:05:54.163 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:54.163 passed
00:05:54.163
00:05:54.163 Run Summary: Type Total Ran Passed Failed Inactive
00:05:54.163 suites 1 1 n/a 0 0
00:05:54.163 tests 2 2 2 0 0
00:05:54.163 asserts 497 497 497 0 n/a
00:05:54.163
00:05:54.163 Elapsed time = 0.962 seconds
00:05:54.163 EAL: Calling mem event callback 'spdk:(nil)'
00:05:54.163 EAL: request: mp_malloc_sync
00:05:54.163 EAL: No shared files mode enabled, IPC is disabled
00:05:54.163 EAL: Heap on socket 0 was shrunk by 2MB
00:05:54.163 EAL: No shared files mode enabled, IPC is disabled
00:05:54.163 EAL: No shared files mode enabled, IPC is disabled
00:05:54.163 EAL: No shared files mode enabled, IPC is disabled
00:05:54.163
00:05:54.163 real 0m1.092s
00:05:54.163 user 0m0.633s
00:05:54.163 sys 0m0.433s
00:05:54.163 15:10:58 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.163 15:10:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:54.163 ************************************
00:05:54.163 END TEST env_vtophys
00:05:54.163 ************************************
00:05:54.421 15:10:58 env -- common/autotest_common.sh@1142 -- # return 0
00:05:54.421 15:10:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:54.421 15:10:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:54.421 15:10:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:54.421 15:10:58 env -- common/autotest_common.sh@10 -- # set +x
00:05:54.421 ************************************
00:05:54.421 START TEST env_pci
00:05:54.421 ************************************
00:05:54.421 15:10:58 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:54.421
00:05:54.421
00:05:54.421 CUnit - A unit testing framework for C - Version 2.1-3
00:05:54.421 http://cunit.sourceforge.net/
00:05:54.421
00:05:54.421
00:05:54.421 Suite: pci
00:05:54.421 Test: pci_hook ...[2024-07-15 15:10:58.155459] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2865032 has claimed it
00:05:54.421 EAL: Cannot find device (10000:00:01.0)
00:05:54.421 EAL: Failed to attach device on primary process
00:05:54.421 passed
00:05:54.421
00:05:54.421 Run Summary: Type Total Ran Passed Failed Inactive
00:05:54.421 suites 1 1 n/a 0 0
00:05:54.421 tests 1 1 1 0 0
00:05:54.421 asserts 25 25 25 0 n/a
00:05:54.421
00:05:54.421 Elapsed time = 0.035 seconds
00:05:54.421
00:05:54.421 real 0m0.057s
00:05:54.421 user 0m0.012s
00:05:54.421 sys 0m0.044s
00:05:54.421 15:10:58 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:54.421 15:10:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:54.421 ************************************
00:05:54.421 END TEST env_pci
00:05:54.421 ************************************
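The pci_hook failure above is the intended negative case: spdk_pci_device_claim takes a per-BDF lock file under /var/tmp (the path is visible in the error), so a second claimer is refused. A hedged sketch for inspecting stale claims on a test node; the fuser guard is an assumption, not part of the harness:

# One lock file per claimed device, named after its BDF (per the error above).
ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null
# Remove a lock only after confirming no live process still holds it:
fuser /var/tmp/spdk_pci_lock_0000:d8:00.0 || rm -f /var/tmp/spdk_pci_lock_0000:d8:00.0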
00:05:54.421 15:10:58 env -- common/autotest_common.sh@1142 -- # return 0
00:05:54.421 15:10:58 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:54.421 15:10:58 env -- env/env.sh@15 -- # uname
00:05:54.421 15:10:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:54.421 15:10:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:54.421 15:10:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:54.421 15:10:58 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:54.421 15:10:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:54.421 15:10:58 env -- common/autotest_common.sh@10 -- # set +x
00:05:54.421 ************************************
00:05:54.421 START TEST env_dpdk_post_init
00:05:54.421 ************************************
00:05:54.421 15:10:58 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:54.421 EAL: Detected CPU lcores: 112
00:05:54.421 EAL: Detected NUMA nodes: 2
00:05:54.421 EAL: Detected shared linkage of DPDK
00:05:54.421 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:54.680 EAL: Selected IOVA mode 'VA'
00:05:54.680 EAL: No free 2048 kB hugepages reported on node 1
00:05:54.680 EAL: VFIO support initialized
00:05:54.680 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:54.680 EAL: Using IOMMU type 1 (Type 1)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:54.680 EAL: Ignore mapping IO port bar(1)
00:05:54.680 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:54.938 EAL: Ignore mapping IO port bar(1)
00:05:54.938 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:54.938 EAL: Ignore mapping IO port bar(1)
00:05:54.938 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:55.505 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:05:59.690 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:05:59.690 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000
00:05:59.690 Starting DPDK initialization...
00:05:59.690 Starting SPDK post initialization...
00:05:59.690 SPDK NVMe probe
00:05:59.690 Attaching to 0000:d8:00.0
00:05:59.690 Attached to 0000:d8:00.0
00:05:59.690 Cleaning up...
00:05:59.690
00:05:59.690 real 0m4.974s
00:05:59.690 user 0m3.675s
00:05:59.690 sys 0m0.355s
00:05:59.690 15:11:03 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:59.690 15:11:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:59.690 ************************************
00:05:59.690 END TEST env_dpdk_post_init
00:05:59.690 ************************************
00:05:59.690 15:11:03 env -- common/autotest_common.sh@1142 -- # return 0
00:05:59.690 15:11:03 env -- env/env.sh@26 -- # uname
00:05:59.690 15:11:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:59.690 15:11:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:59.690 15:11:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:59.690 15:11:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:59.690 15:11:03 env -- common/autotest_common.sh@10 -- # set +x
00:05:59.690 ************************************
00:05:59.690 START TEST env_mem_callbacks
00:05:59.690 ************************************
00:05:59.690 15:11:03 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:59.690 EAL: Detected CPU lcores: 112
00:05:59.690 EAL: Detected NUMA nodes: 2
00:05:59.690 EAL: Detected shared linkage of DPDK
00:05:59.690 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:59.690 EAL: Selected IOVA mode 'VA'
00:05:59.690 EAL: No free 2048 kB hugepages reported on node 1
00:05:59.690 EAL: VFIO support initialized
00:05:59.690 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:59.690
00:05:59.690
00:05:59.690 CUnit - A unit testing framework for C - Version 2.1-3
00:05:59.690 http://cunit.sourceforge.net/
00:05:59.690
00:05:59.690
00:05:59.690 Suite: memory
00:05:59.690 Test: test ...
00:05:59.690 register 0x200000200000 2097152 00:05:59.690 malloc 3145728 00:05:59.690 register 0x200000400000 4194304 00:05:59.690 buf 0x200000500000 len 3145728 PASSED 00:05:59.690 malloc 64 00:05:59.690 buf 0x2000004fff40 len 64 PASSED 00:05:59.690 malloc 4194304 00:05:59.690 register 0x200000800000 6291456 00:05:59.690 buf 0x200000a00000 len 4194304 PASSED 00:05:59.690 free 0x200000500000 3145728 00:05:59.690 free 0x2000004fff40 64 00:05:59.690 unregister 0x200000400000 4194304 PASSED 00:05:59.690 free 0x200000a00000 4194304 00:05:59.690 unregister 0x200000800000 6291456 PASSED 00:05:59.690 malloc 8388608 00:05:59.690 register 0x200000400000 10485760 00:05:59.690 buf 0x200000600000 len 8388608 PASSED 00:05:59.690 free 0x200000600000 8388608 00:05:59.690 unregister 0x200000400000 10485760 PASSED 00:05:59.690 passed 00:05:59.690 00:05:59.690 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.690 suites 1 1 n/a 0 0 00:05:59.690 tests 1 1 1 0 0 00:05:59.690 asserts 15 15 15 0 n/a 00:05:59.690 00:05:59.690 Elapsed time = 0.005 seconds 00:05:59.690 00:05:59.690 real 0m0.050s 00:05:59.690 user 0m0.021s 00:05:59.690 sys 0m0.029s 00:05:59.690 15:11:03 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.690 15:11:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:59.690 ************************************ 00:05:59.690 END TEST env_mem_callbacks 00:05:59.690 ************************************ 00:05:59.690 15:11:03 env -- common/autotest_common.sh@1142 -- # return 0 00:05:59.690 00:05:59.690 real 0m6.822s 00:05:59.690 user 0m4.641s 00:05:59.690 sys 0m1.246s 00:05:59.690 15:11:03 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.690 15:11:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.690 ************************************ 00:05:59.690 END TEST env 00:05:59.690 ************************************ 00:05:59.690 15:11:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.690 15:11:03 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:59.690 15:11:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.690 15:11:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.690 15:11:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.690 ************************************ 00:05:59.690 START TEST rpc 00:05:59.690 ************************************ 00:05:59.690 15:11:03 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:59.949 * Looking for test storage... 00:05:59.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:59.949 15:11:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2866015 00:05:59.949 15:11:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.949 15:11:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2866015 00:05:59.949 15:11:03 rpc -- common/autotest_common.sh@829 -- # '[' -z 2866015 ']' 00:05:59.949 15:11:03 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.949 15:11:03 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.949 15:11:03 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
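The env suite that just completed exercises SPDK's DPDK environment layer: env_dpdk_post_init brings EAL up with core mask 0x1 and a fixed --base-virtaddr so hugepage mappings land at a predictable address, and the mem_callbacks output above shows the register/unregister callbacks firing for each malloc'd buffer. A minimal sketch of rerunning the first test by hand, using the same flags the harness passed (paths assume an SPDK build tree; the HUGEMEM value is illustrative, not taken from this run):

    # allocate hugepages and bind devices (illustrative size)
    sudo HUGEMEM=2048 ./scripts/setup.sh
    # run the post-init test exactly as the harness did above
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000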
00:05:59.949 15:11:03 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.949 15:11:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.949 15:11:03 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:59.949 [2024-07-15 15:11:03.671924] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:05:59.949 [2024-07-15 15:11:03.671978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866015 ] 00:05:59.949 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.949 [2024-07-15 15:11:03.741691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.949 [2024-07-15 15:11:03.814846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:59.949 [2024-07-15 15:11:03.814883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2866015' to capture a snapshot of events at runtime. 00:05:59.949 [2024-07-15 15:11:03.814892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.949 [2024-07-15 15:11:03.814900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:59.949 [2024-07-15 15:11:03.814923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2866015 for offline analysis/debug. 00:05:59.949 [2024-07-15 15:11:03.814943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.883 15:11:04 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.884 15:11:04 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:00.884 15:11:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:00.884 15:11:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:00.884 15:11:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:00.884 15:11:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:00.884 15:11:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.884 15:11:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.884 15:11:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 ************************************ 00:06:00.884 START TEST rpc_integrity 00:06:00.884 ************************************ 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:00.884 { 00:06:00.884 "name": "Malloc0", 00:06:00.884 "aliases": [ 00:06:00.884 "1f554d4c-b29f-47db-99ad-c09441334f13" 00:06:00.884 ], 00:06:00.884 "product_name": "Malloc disk", 00:06:00.884 "block_size": 512, 00:06:00.884 "num_blocks": 16384, 00:06:00.884 "uuid": "1f554d4c-b29f-47db-99ad-c09441334f13", 00:06:00.884 "assigned_rate_limits": { 00:06:00.884 "rw_ios_per_sec": 0, 00:06:00.884 "rw_mbytes_per_sec": 0, 00:06:00.884 "r_mbytes_per_sec": 0, 00:06:00.884 "w_mbytes_per_sec": 0 00:06:00.884 }, 00:06:00.884 "claimed": false, 00:06:00.884 "zoned": false, 00:06:00.884 "supported_io_types": { 00:06:00.884 "read": true, 00:06:00.884 "write": true, 00:06:00.884 "unmap": true, 00:06:00.884 "flush": true, 00:06:00.884 "reset": true, 00:06:00.884 "nvme_admin": false, 00:06:00.884 "nvme_io": false, 00:06:00.884 "nvme_io_md": false, 00:06:00.884 "write_zeroes": true, 00:06:00.884 "zcopy": true, 00:06:00.884 "get_zone_info": false, 00:06:00.884 "zone_management": false, 00:06:00.884 "zone_append": false, 00:06:00.884 "compare": false, 00:06:00.884 "compare_and_write": false, 00:06:00.884 "abort": true, 00:06:00.884 "seek_hole": false, 00:06:00.884 "seek_data": false, 00:06:00.884 "copy": true, 00:06:00.884 "nvme_iov_md": false 00:06:00.884 }, 00:06:00.884 "memory_domains": [ 00:06:00.884 { 00:06:00.884 "dma_device_id": "system", 00:06:00.884 "dma_device_type": 1 00:06:00.884 }, 00:06:00.884 { 00:06:00.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.884 "dma_device_type": 2 00:06:00.884 } 00:06:00.884 ], 00:06:00.884 "driver_specific": {} 00:06:00.884 } 00:06:00.884 ]' 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 [2024-07-15 15:11:04.636501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:00.884 [2024-07-15 15:11:04.636532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.884 [2024-07-15 15:11:04.636546] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6ab440 00:06:00.884 [2024-07-15 15:11:04.636554] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:00.884 [2024-07-15 15:11:04.637609] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:00.884 [2024-07-15 15:11:04.637631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:00.884 Passthru0 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:00.884 { 00:06:00.884 "name": "Malloc0", 00:06:00.884 "aliases": [ 00:06:00.884 "1f554d4c-b29f-47db-99ad-c09441334f13" 00:06:00.884 ], 00:06:00.884 "product_name": "Malloc disk", 00:06:00.884 "block_size": 512, 00:06:00.884 "num_blocks": 16384, 00:06:00.884 "uuid": "1f554d4c-b29f-47db-99ad-c09441334f13", 00:06:00.884 "assigned_rate_limits": { 00:06:00.884 "rw_ios_per_sec": 0, 00:06:00.884 "rw_mbytes_per_sec": 0, 00:06:00.884 "r_mbytes_per_sec": 0, 00:06:00.884 "w_mbytes_per_sec": 0 00:06:00.884 }, 00:06:00.884 "claimed": true, 00:06:00.884 "claim_type": "exclusive_write", 00:06:00.884 "zoned": false, 00:06:00.884 "supported_io_types": { 00:06:00.884 "read": true, 00:06:00.884 "write": true, 00:06:00.884 "unmap": true, 00:06:00.884 "flush": true, 00:06:00.884 "reset": true, 00:06:00.884 "nvme_admin": false, 00:06:00.884 "nvme_io": false, 00:06:00.884 "nvme_io_md": false, 00:06:00.884 "write_zeroes": true, 00:06:00.884 "zcopy": true, 00:06:00.884 "get_zone_info": false, 00:06:00.884 "zone_management": false, 00:06:00.884 "zone_append": false, 00:06:00.884 "compare": false, 00:06:00.884 "compare_and_write": false, 00:06:00.884 "abort": true, 00:06:00.884 "seek_hole": false, 00:06:00.884 "seek_data": false, 00:06:00.884 "copy": true, 00:06:00.884 "nvme_iov_md": false 00:06:00.884 }, 00:06:00.884 "memory_domains": [ 00:06:00.884 { 00:06:00.884 "dma_device_id": "system", 00:06:00.884 "dma_device_type": 1 00:06:00.884 }, 00:06:00.884 { 00:06:00.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.884 "dma_device_type": 2 00:06:00.884 } 00:06:00.884 ], 00:06:00.884 "driver_specific": {} 00:06:00.884 }, 00:06:00.884 { 00:06:00.884 "name": "Passthru0", 00:06:00.884 "aliases": [ 00:06:00.884 "a9510d3e-9932-52ab-9700-f3cedb0091a9" 00:06:00.884 ], 00:06:00.884 "product_name": "passthru", 00:06:00.884 "block_size": 512, 00:06:00.884 "num_blocks": 16384, 00:06:00.884 "uuid": "a9510d3e-9932-52ab-9700-f3cedb0091a9", 00:06:00.884 "assigned_rate_limits": { 00:06:00.884 "rw_ios_per_sec": 0, 00:06:00.884 "rw_mbytes_per_sec": 0, 00:06:00.884 "r_mbytes_per_sec": 0, 00:06:00.884 "w_mbytes_per_sec": 0 00:06:00.884 }, 00:06:00.884 "claimed": false, 00:06:00.884 "zoned": false, 00:06:00.884 "supported_io_types": { 00:06:00.884 "read": true, 00:06:00.884 "write": true, 00:06:00.884 "unmap": true, 00:06:00.884 "flush": true, 00:06:00.884 "reset": true, 00:06:00.884 "nvme_admin": false, 00:06:00.884 "nvme_io": false, 00:06:00.884 "nvme_io_md": false, 00:06:00.884 "write_zeroes": true, 00:06:00.884 "zcopy": true, 00:06:00.884 "get_zone_info": false, 00:06:00.884 "zone_management": false, 00:06:00.884 "zone_append": false, 00:06:00.884 "compare": false, 00:06:00.884 "compare_and_write": false, 00:06:00.884 "abort": true, 00:06:00.884 
"seek_hole": false, 00:06:00.884 "seek_data": false, 00:06:00.884 "copy": true, 00:06:00.884 "nvme_iov_md": false 00:06:00.884 }, 00:06:00.884 "memory_domains": [ 00:06:00.884 { 00:06:00.884 "dma_device_id": "system", 00:06:00.884 "dma_device_type": 1 00:06:00.884 }, 00:06:00.884 { 00:06:00.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.884 "dma_device_type": 2 00:06:00.884 } 00:06:00.884 ], 00:06:00.884 "driver_specific": { 00:06:00.884 "passthru": { 00:06:00.884 "name": "Passthru0", 00:06:00.884 "base_bdev_name": "Malloc0" 00:06:00.884 } 00:06:00.884 } 00:06:00.884 } 00:06:00.884 ]' 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:00.884 15:11:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:00.884 00:06:00.884 real 0m0.290s 00:06:00.884 user 0m0.174s 00:06:00.884 sys 0m0.053s 00:06:00.884 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.885 15:11:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.885 ************************************ 00:06:00.885 END TEST rpc_integrity 00:06:00.885 ************************************ 00:06:01.142 15:11:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.142 15:11:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:01.142 15:11:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.142 15:11:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.142 15:11:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 ************************************ 00:06:01.142 START TEST rpc_plugins 00:06:01.142 ************************************ 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:01.142 15:11:04 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:01.142 { 00:06:01.142 "name": "Malloc1", 00:06:01.142 "aliases": [ 00:06:01.142 "98ce95d2-cc44-4666-b821-25fb75d39568" 00:06:01.142 ], 00:06:01.142 "product_name": "Malloc disk", 00:06:01.142 "block_size": 4096, 00:06:01.142 "num_blocks": 256, 00:06:01.142 "uuid": "98ce95d2-cc44-4666-b821-25fb75d39568", 00:06:01.142 "assigned_rate_limits": { 00:06:01.142 "rw_ios_per_sec": 0, 00:06:01.142 "rw_mbytes_per_sec": 0, 00:06:01.142 "r_mbytes_per_sec": 0, 00:06:01.142 "w_mbytes_per_sec": 0 00:06:01.142 }, 00:06:01.142 "claimed": false, 00:06:01.142 "zoned": false, 00:06:01.142 "supported_io_types": { 00:06:01.142 "read": true, 00:06:01.142 "write": true, 00:06:01.142 "unmap": true, 00:06:01.142 "flush": true, 00:06:01.142 "reset": true, 00:06:01.142 "nvme_admin": false, 00:06:01.142 "nvme_io": false, 00:06:01.142 "nvme_io_md": false, 00:06:01.142 "write_zeroes": true, 00:06:01.142 "zcopy": true, 00:06:01.142 "get_zone_info": false, 00:06:01.142 "zone_management": false, 00:06:01.142 "zone_append": false, 00:06:01.142 "compare": false, 00:06:01.142 "compare_and_write": false, 00:06:01.142 "abort": true, 00:06:01.142 "seek_hole": false, 00:06:01.142 "seek_data": false, 00:06:01.142 "copy": true, 00:06:01.142 "nvme_iov_md": false 00:06:01.142 }, 00:06:01.142 "memory_domains": [ 00:06:01.142 { 00:06:01.142 "dma_device_id": "system", 00:06:01.142 "dma_device_type": 1 00:06:01.142 }, 00:06:01.142 { 00:06:01.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.142 "dma_device_type": 2 00:06:01.142 } 00:06:01.142 ], 00:06:01.142 "driver_specific": {} 00:06:01.142 } 00:06:01.142 ]' 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 15:11:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:01.142 15:11:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:01.142 15:11:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:01.142 00:06:01.142 real 0m0.132s 00:06:01.142 user 0m0.073s 00:06:01.142 sys 0m0.022s 00:06:01.142 15:11:05 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.142 15:11:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.142 ************************************ 00:06:01.142 END TEST rpc_plugins 00:06:01.142 ************************************ 00:06:01.142 15:11:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.142 15:11:05 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:01.143 15:11:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.143 15:11:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.143 15:11:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.400 ************************************ 00:06:01.400 START TEST rpc_trace_cmd_test 00:06:01.400 ************************************ 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:01.400 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2866015", 00:06:01.400 "tpoint_group_mask": "0x8", 00:06:01.400 "iscsi_conn": { 00:06:01.400 "mask": "0x2", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "scsi": { 00:06:01.400 "mask": "0x4", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "bdev": { 00:06:01.400 "mask": "0x8", 00:06:01.400 "tpoint_mask": "0xffffffffffffffff" 00:06:01.400 }, 00:06:01.400 "nvmf_rdma": { 00:06:01.400 "mask": "0x10", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "nvmf_tcp": { 00:06:01.400 "mask": "0x20", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "ftl": { 00:06:01.400 "mask": "0x40", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "blobfs": { 00:06:01.400 "mask": "0x80", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "dsa": { 00:06:01.400 "mask": "0x200", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "thread": { 00:06:01.400 "mask": "0x400", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "nvme_pcie": { 00:06:01.400 "mask": "0x800", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "iaa": { 00:06:01.400 "mask": "0x1000", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "nvme_tcp": { 00:06:01.400 "mask": "0x2000", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "bdev_nvme": { 00:06:01.400 "mask": "0x4000", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 }, 00:06:01.400 "sock": { 00:06:01.400 "mask": "0x8000", 00:06:01.400 "tpoint_mask": "0x0" 00:06:01.400 } 00:06:01.400 }' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:01.400 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:01.657 15:11:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:06:01.657 00:06:01.657 real 0m0.230s 00:06:01.657 user 0m0.189s 00:06:01.657 sys 0m0.031s 00:06:01.657 15:11:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.657 15:11:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.657 ************************************ 00:06:01.657 END TEST rpc_trace_cmd_test 00:06:01.657 ************************************ 00:06:01.657 15:11:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.657 15:11:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:01.657 15:11:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:01.657 15:11:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:01.657 15:11:05 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.657 15:11:05 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.657 15:11:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.657 ************************************ 00:06:01.657 START TEST rpc_daemon_integrity 00:06:01.657 ************************************ 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.657 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.657 { 00:06:01.657 "name": "Malloc2", 00:06:01.657 "aliases": [ 00:06:01.657 "33ef62f7-a7b8-42a8-96e3-fca67f66e9d2" 00:06:01.658 ], 00:06:01.658 "product_name": "Malloc disk", 00:06:01.658 "block_size": 512, 00:06:01.658 "num_blocks": 16384, 00:06:01.658 "uuid": "33ef62f7-a7b8-42a8-96e3-fca67f66e9d2", 00:06:01.658 "assigned_rate_limits": { 00:06:01.658 "rw_ios_per_sec": 0, 00:06:01.658 "rw_mbytes_per_sec": 0, 00:06:01.658 "r_mbytes_per_sec": 0, 00:06:01.658 "w_mbytes_per_sec": 0 00:06:01.658 }, 00:06:01.658 "claimed": false, 00:06:01.658 "zoned": false, 00:06:01.658 "supported_io_types": { 00:06:01.658 "read": true, 00:06:01.658 "write": true, 00:06:01.658 "unmap": true, 00:06:01.658 "flush": true, 00:06:01.658 "reset": true, 00:06:01.658 "nvme_admin": false, 
00:06:01.658 "nvme_io": false, 00:06:01.658 "nvme_io_md": false, 00:06:01.658 "write_zeroes": true, 00:06:01.658 "zcopy": true, 00:06:01.658 "get_zone_info": false, 00:06:01.658 "zone_management": false, 00:06:01.658 "zone_append": false, 00:06:01.658 "compare": false, 00:06:01.658 "compare_and_write": false, 00:06:01.658 "abort": true, 00:06:01.658 "seek_hole": false, 00:06:01.658 "seek_data": false, 00:06:01.658 "copy": true, 00:06:01.658 "nvme_iov_md": false 00:06:01.658 }, 00:06:01.658 "memory_domains": [ 00:06:01.658 { 00:06:01.658 "dma_device_id": "system", 00:06:01.658 "dma_device_type": 1 00:06:01.658 }, 00:06:01.658 { 00:06:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.658 "dma_device_type": 2 00:06:01.658 } 00:06:01.658 ], 00:06:01.658 "driver_specific": {} 00:06:01.658 } 00:06:01.658 ]' 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.658 [2024-07-15 15:11:05.522910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:01.658 [2024-07-15 15:11:05.522938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.658 [2024-07-15 15:11:05.522951] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x850e70 00:06:01.658 [2024-07-15 15:11:05.522959] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.658 [2024-07-15 15:11:05.523878] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.658 [2024-07-15 15:11:05.523899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.658 Passthru0 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.658 { 00:06:01.658 "name": "Malloc2", 00:06:01.658 "aliases": [ 00:06:01.658 "33ef62f7-a7b8-42a8-96e3-fca67f66e9d2" 00:06:01.658 ], 00:06:01.658 "product_name": "Malloc disk", 00:06:01.658 "block_size": 512, 00:06:01.658 "num_blocks": 16384, 00:06:01.658 "uuid": "33ef62f7-a7b8-42a8-96e3-fca67f66e9d2", 00:06:01.658 "assigned_rate_limits": { 00:06:01.658 "rw_ios_per_sec": 0, 00:06:01.658 "rw_mbytes_per_sec": 0, 00:06:01.658 "r_mbytes_per_sec": 0, 00:06:01.658 "w_mbytes_per_sec": 0 00:06:01.658 }, 00:06:01.658 "claimed": true, 00:06:01.658 "claim_type": "exclusive_write", 00:06:01.658 "zoned": false, 00:06:01.658 "supported_io_types": { 00:06:01.658 "read": true, 00:06:01.658 "write": true, 00:06:01.658 "unmap": true, 00:06:01.658 "flush": true, 00:06:01.658 "reset": true, 00:06:01.658 "nvme_admin": false, 00:06:01.658 "nvme_io": false, 00:06:01.658 "nvme_io_md": false, 00:06:01.658 "write_zeroes": true, 00:06:01.658 "zcopy": true, 
00:06:01.658 "get_zone_info": false, 00:06:01.658 "zone_management": false, 00:06:01.658 "zone_append": false, 00:06:01.658 "compare": false, 00:06:01.658 "compare_and_write": false, 00:06:01.658 "abort": true, 00:06:01.658 "seek_hole": false, 00:06:01.658 "seek_data": false, 00:06:01.658 "copy": true, 00:06:01.658 "nvme_iov_md": false 00:06:01.658 }, 00:06:01.658 "memory_domains": [ 00:06:01.658 { 00:06:01.658 "dma_device_id": "system", 00:06:01.658 "dma_device_type": 1 00:06:01.658 }, 00:06:01.658 { 00:06:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.658 "dma_device_type": 2 00:06:01.658 } 00:06:01.658 ], 00:06:01.658 "driver_specific": {} 00:06:01.658 }, 00:06:01.658 { 00:06:01.658 "name": "Passthru0", 00:06:01.658 "aliases": [ 00:06:01.658 "4769870c-177f-5433-b8b2-3af5b3e2719c" 00:06:01.658 ], 00:06:01.658 "product_name": "passthru", 00:06:01.658 "block_size": 512, 00:06:01.658 "num_blocks": 16384, 00:06:01.658 "uuid": "4769870c-177f-5433-b8b2-3af5b3e2719c", 00:06:01.658 "assigned_rate_limits": { 00:06:01.658 "rw_ios_per_sec": 0, 00:06:01.658 "rw_mbytes_per_sec": 0, 00:06:01.658 "r_mbytes_per_sec": 0, 00:06:01.658 "w_mbytes_per_sec": 0 00:06:01.658 }, 00:06:01.658 "claimed": false, 00:06:01.658 "zoned": false, 00:06:01.658 "supported_io_types": { 00:06:01.658 "read": true, 00:06:01.658 "write": true, 00:06:01.658 "unmap": true, 00:06:01.658 "flush": true, 00:06:01.658 "reset": true, 00:06:01.658 "nvme_admin": false, 00:06:01.658 "nvme_io": false, 00:06:01.658 "nvme_io_md": false, 00:06:01.658 "write_zeroes": true, 00:06:01.658 "zcopy": true, 00:06:01.658 "get_zone_info": false, 00:06:01.658 "zone_management": false, 00:06:01.658 "zone_append": false, 00:06:01.658 "compare": false, 00:06:01.658 "compare_and_write": false, 00:06:01.658 "abort": true, 00:06:01.658 "seek_hole": false, 00:06:01.658 "seek_data": false, 00:06:01.658 "copy": true, 00:06:01.658 "nvme_iov_md": false 00:06:01.658 }, 00:06:01.658 "memory_domains": [ 00:06:01.658 { 00:06:01.658 "dma_device_id": "system", 00:06:01.658 "dma_device_type": 1 00:06:01.658 }, 00:06:01.658 { 00:06:01.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.658 "dma_device_type": 2 00:06:01.658 } 00:06:01.658 ], 00:06:01.658 "driver_specific": { 00:06:01.658 "passthru": { 00:06:01.658 "name": "Passthru0", 00:06:01.658 "base_bdev_name": "Malloc2" 00:06:01.658 } 00:06:01.658 } 00:06:01.658 } 00:06:01.658 ]' 00:06:01.658 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.916 00:06:01.916 real 0m0.284s 00:06:01.916 user 0m0.183s 00:06:01.916 sys 0m0.034s 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.916 15:11:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.916 ************************************ 00:06:01.916 END TEST rpc_daemon_integrity 00:06:01.916 ************************************ 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.916 15:11:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:01.916 15:11:05 rpc -- rpc/rpc.sh@84 -- # killprocess 2866015 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@948 -- # '[' -z 2866015 ']' 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@952 -- # kill -0 2866015 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@953 -- # uname 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2866015 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2866015' 00:06:01.916 killing process with pid 2866015 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@967 -- # kill 2866015 00:06:01.916 15:11:05 rpc -- common/autotest_common.sh@972 -- # wait 2866015 00:06:02.174 00:06:02.174 real 0m2.562s 00:06:02.174 user 0m3.248s 00:06:02.174 sys 0m0.789s 00:06:02.174 15:11:06 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.174 15:11:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.174 ************************************ 00:06:02.174 END TEST rpc 00:06:02.174 ************************************ 00:06:02.431 15:11:06 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.431 15:11:06 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:02.431 15:11:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.432 15:11:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.432 15:11:06 -- common/autotest_common.sh@10 -- # set +x 00:06:02.432 ************************************ 00:06:02.432 START TEST skip_rpc 00:06:02.432 ************************************ 00:06:02.432 15:11:06 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:02.432 * Looking for test storage... 
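The rpc_integrity and rpc_daemon_integrity passes above drive the same bdev lifecycle over the RPC socket: create a malloc bdev, layer a passthru bdev on it, check both show up in bdev_get_bdevs, then tear them down in reverse order. The equivalent flow by hand against a running spdk_tgt, sketched with scripts/rpc.py and the default /var/tmp/spdk.sock socket (all RPC names as seen in the trace above):

    ./scripts/rpc.py bdev_malloc_create 8 512              # returns Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length            # expect 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0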
00:06:02.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:02.432 15:11:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:02.432 15:11:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.432 15:11:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:02.432 15:11:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.432 15:11:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.432 15:11:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.432 ************************************ 00:06:02.432 START TEST skip_rpc 00:06:02.432 ************************************ 00:06:02.432 15:11:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:02.432 15:11:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2866708 00:06:02.432 15:11:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.432 15:11:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:02.432 15:11:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:02.689 [2024-07-15 15:11:06.358255] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:02.689 [2024-07-15 15:11:06.358298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866708 ] 00:06:02.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.689 [2024-07-15 15:11:06.424347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.689 [2024-07-15 15:11:06.493169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2866708 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2866708 ']' 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2866708 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2866708 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2866708' 00:06:07.951 killing process with pid 2866708 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2866708 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2866708 00:06:07.951 00:06:07.951 real 0m5.374s 00:06:07.951 user 0m5.137s 00:06:07.951 sys 0m0.274s 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.951 15:11:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.951 ************************************ 00:06:07.951 END TEST skip_rpc 00:06:07.951 ************************************ 00:06:07.951 15:11:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:07.951 15:11:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:07.951 15:11:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.951 15:11:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.951 15:11:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.951 ************************************ 00:06:07.951 START TEST skip_rpc_with_json 00:06:07.951 ************************************ 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2867626 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2867626 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2867626 ']' 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
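The skip_rpc pass that just ended starts spdk_tgt with --no-rpc-server and then asserts that an RPC call fails, which is why the NOT rpc_cmd spdk_get_version block above returns es=1. A sketch of reproducing the failing call manually (the exact error text is whatever rpc.py prints when the socket is absent):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    ./scripts/rpc.py spdk_get_version   # fails: nothing listening on /var/tmp/spdk.sock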
00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.951 15:11:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.951 [2024-07-15 15:11:11.816817] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:07.951 [2024-07-15 15:11:11.816863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2867626 ] 00:06:07.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.210 [2024-07-15 15:11:11.885287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.210 [2024-07-15 15:11:11.954988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.777 [2024-07-15 15:11:12.605877] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:08.777 request: 00:06:08.777 { 00:06:08.777 "trtype": "tcp", 00:06:08.777 "method": "nvmf_get_transports", 00:06:08.777 "req_id": 1 00:06:08.777 } 00:06:08.777 Got JSON-RPC error response 00:06:08.777 response: 00:06:08.777 { 00:06:08.777 "code": -19, 00:06:08.777 "message": "No such device" 00:06:08.777 } 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.777 [2024-07-15 15:11:12.613977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.777 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.036 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.036 15:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:09.036 { 00:06:09.036 "subsystems": [ 00:06:09.036 { 00:06:09.036 "subsystem": "vfio_user_target", 00:06:09.036 "config": null 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "subsystem": "keyring", 00:06:09.036 "config": [] 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "subsystem": "iobuf", 00:06:09.036 "config": [ 00:06:09.036 { 00:06:09.036 "method": "iobuf_set_options", 00:06:09.036 "params": { 00:06:09.036 "small_pool_count": 8192, 00:06:09.036 "large_pool_count": 1024, 00:06:09.036 "small_bufsize": 8192, 00:06:09.036 "large_bufsize": 
135168 00:06:09.036 } 00:06:09.036 } 00:06:09.036 ] 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "subsystem": "sock", 00:06:09.036 "config": [ 00:06:09.036 { 00:06:09.036 "method": "sock_set_default_impl", 00:06:09.036 "params": { 00:06:09.036 "impl_name": "posix" 00:06:09.036 } 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "method": "sock_impl_set_options", 00:06:09.036 "params": { 00:06:09.036 "impl_name": "ssl", 00:06:09.036 "recv_buf_size": 4096, 00:06:09.036 "send_buf_size": 4096, 00:06:09.036 "enable_recv_pipe": true, 00:06:09.036 "enable_quickack": false, 00:06:09.036 "enable_placement_id": 0, 00:06:09.036 "enable_zerocopy_send_server": true, 00:06:09.036 "enable_zerocopy_send_client": false, 00:06:09.036 "zerocopy_threshold": 0, 00:06:09.036 "tls_version": 0, 00:06:09.036 "enable_ktls": false 00:06:09.036 } 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "method": "sock_impl_set_options", 00:06:09.036 "params": { 00:06:09.036 "impl_name": "posix", 00:06:09.036 "recv_buf_size": 2097152, 00:06:09.036 "send_buf_size": 2097152, 00:06:09.036 "enable_recv_pipe": true, 00:06:09.036 "enable_quickack": false, 00:06:09.036 "enable_placement_id": 0, 00:06:09.036 "enable_zerocopy_send_server": true, 00:06:09.036 "enable_zerocopy_send_client": false, 00:06:09.036 "zerocopy_threshold": 0, 00:06:09.036 "tls_version": 0, 00:06:09.036 "enable_ktls": false 00:06:09.036 } 00:06:09.036 } 00:06:09.036 ] 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "subsystem": "vmd", 00:06:09.036 "config": [] 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "subsystem": "accel", 00:06:09.036 "config": [ 00:06:09.036 { 00:06:09.036 "method": "accel_set_options", 00:06:09.036 "params": { 00:06:09.036 "small_cache_size": 128, 00:06:09.036 "large_cache_size": 16, 00:06:09.036 "task_count": 2048, 00:06:09.036 "sequence_count": 2048, 00:06:09.036 "buf_count": 2048 00:06:09.036 } 00:06:09.036 } 00:06:09.036 ] 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "subsystem": "bdev", 00:06:09.036 "config": [ 00:06:09.036 { 00:06:09.036 "method": "bdev_set_options", 00:06:09.036 "params": { 00:06:09.036 "bdev_io_pool_size": 65535, 00:06:09.036 "bdev_io_cache_size": 256, 00:06:09.036 "bdev_auto_examine": true, 00:06:09.036 "iobuf_small_cache_size": 128, 00:06:09.036 "iobuf_large_cache_size": 16 00:06:09.036 } 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "method": "bdev_raid_set_options", 00:06:09.036 "params": { 00:06:09.036 "process_window_size_kb": 1024 00:06:09.036 } 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "method": "bdev_iscsi_set_options", 00:06:09.036 "params": { 00:06:09.036 "timeout_sec": 30 00:06:09.036 } 00:06:09.036 }, 00:06:09.036 { 00:06:09.036 "method": "bdev_nvme_set_options", 00:06:09.036 "params": { 00:06:09.036 "action_on_timeout": "none", 00:06:09.036 "timeout_us": 0, 00:06:09.036 "timeout_admin_us": 0, 00:06:09.036 "keep_alive_timeout_ms": 10000, 00:06:09.036 "arbitration_burst": 0, 00:06:09.037 "low_priority_weight": 0, 00:06:09.037 "medium_priority_weight": 0, 00:06:09.037 "high_priority_weight": 0, 00:06:09.037 "nvme_adminq_poll_period_us": 10000, 00:06:09.037 "nvme_ioq_poll_period_us": 0, 00:06:09.037 "io_queue_requests": 0, 00:06:09.037 "delay_cmd_submit": true, 00:06:09.037 "transport_retry_count": 4, 00:06:09.037 "bdev_retry_count": 3, 00:06:09.037 "transport_ack_timeout": 0, 00:06:09.037 "ctrlr_loss_timeout_sec": 0, 00:06:09.037 "reconnect_delay_sec": 0, 00:06:09.037 "fast_io_fail_timeout_sec": 0, 00:06:09.037 "disable_auto_failback": false, 00:06:09.037 "generate_uuids": false, 00:06:09.037 "transport_tos": 0, 
00:06:09.037 "nvme_error_stat": false, 00:06:09.037 "rdma_srq_size": 0, 00:06:09.037 "io_path_stat": false, 00:06:09.037 "allow_accel_sequence": false, 00:06:09.037 "rdma_max_cq_size": 0, 00:06:09.037 "rdma_cm_event_timeout_ms": 0, 00:06:09.037 "dhchap_digests": [ 00:06:09.037 "sha256", 00:06:09.037 "sha384", 00:06:09.037 "sha512" 00:06:09.037 ], 00:06:09.037 "dhchap_dhgroups": [ 00:06:09.037 "null", 00:06:09.037 "ffdhe2048", 00:06:09.037 "ffdhe3072", 00:06:09.037 "ffdhe4096", 00:06:09.037 "ffdhe6144", 00:06:09.037 "ffdhe8192" 00:06:09.037 ] 00:06:09.037 } 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "method": "bdev_nvme_set_hotplug", 00:06:09.037 "params": { 00:06:09.037 "period_us": 100000, 00:06:09.037 "enable": false 00:06:09.037 } 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "method": "bdev_wait_for_examine" 00:06:09.037 } 00:06:09.037 ] 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "scsi", 00:06:09.037 "config": null 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "scheduler", 00:06:09.037 "config": [ 00:06:09.037 { 00:06:09.037 "method": "framework_set_scheduler", 00:06:09.037 "params": { 00:06:09.037 "name": "static" 00:06:09.037 } 00:06:09.037 } 00:06:09.037 ] 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "vhost_scsi", 00:06:09.037 "config": [] 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "vhost_blk", 00:06:09.037 "config": [] 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "ublk", 00:06:09.037 "config": [] 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "nbd", 00:06:09.037 "config": [] 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "nvmf", 00:06:09.037 "config": [ 00:06:09.037 { 00:06:09.037 "method": "nvmf_set_config", 00:06:09.037 "params": { 00:06:09.037 "discovery_filter": "match_any", 00:06:09.037 "admin_cmd_passthru": { 00:06:09.037 "identify_ctrlr": false 00:06:09.037 } 00:06:09.037 } 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "method": "nvmf_set_max_subsystems", 00:06:09.037 "params": { 00:06:09.037 "max_subsystems": 1024 00:06:09.037 } 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "method": "nvmf_set_crdt", 00:06:09.037 "params": { 00:06:09.037 "crdt1": 0, 00:06:09.037 "crdt2": 0, 00:06:09.037 "crdt3": 0 00:06:09.037 } 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "method": "nvmf_create_transport", 00:06:09.037 "params": { 00:06:09.037 "trtype": "TCP", 00:06:09.037 "max_queue_depth": 128, 00:06:09.037 "max_io_qpairs_per_ctrlr": 127, 00:06:09.037 "in_capsule_data_size": 4096, 00:06:09.037 "max_io_size": 131072, 00:06:09.037 "io_unit_size": 131072, 00:06:09.037 "max_aq_depth": 128, 00:06:09.037 "num_shared_buffers": 511, 00:06:09.037 "buf_cache_size": 4294967295, 00:06:09.037 "dif_insert_or_strip": false, 00:06:09.037 "zcopy": false, 00:06:09.037 "c2h_success": true, 00:06:09.037 "sock_priority": 0, 00:06:09.037 "abort_timeout_sec": 1, 00:06:09.037 "ack_timeout": 0, 00:06:09.037 "data_wr_pool_size": 0 00:06:09.037 } 00:06:09.037 } 00:06:09.037 ] 00:06:09.037 }, 00:06:09.037 { 00:06:09.037 "subsystem": "iscsi", 00:06:09.037 "config": [ 00:06:09.037 { 00:06:09.037 "method": "iscsi_set_options", 00:06:09.037 "params": { 00:06:09.037 "node_base": "iqn.2016-06.io.spdk", 00:06:09.037 "max_sessions": 128, 00:06:09.037 "max_connections_per_session": 2, 00:06:09.037 "max_queue_depth": 64, 00:06:09.037 "default_time2wait": 2, 00:06:09.037 "default_time2retain": 20, 00:06:09.037 "first_burst_length": 8192, 00:06:09.037 "immediate_data": true, 00:06:09.037 "allow_duplicated_isid": false, 00:06:09.037 
"error_recovery_level": 0, 00:06:09.037 "nop_timeout": 60, 00:06:09.037 "nop_in_interval": 30, 00:06:09.037 "disable_chap": false, 00:06:09.037 "require_chap": false, 00:06:09.037 "mutual_chap": false, 00:06:09.037 "chap_group": 0, 00:06:09.037 "max_large_datain_per_connection": 64, 00:06:09.037 "max_r2t_per_connection": 4, 00:06:09.037 "pdu_pool_size": 36864, 00:06:09.037 "immediate_data_pool_size": 16384, 00:06:09.037 "data_out_pool_size": 2048 00:06:09.037 } 00:06:09.037 } 00:06:09.037 ] 00:06:09.037 } 00:06:09.037 ] 00:06:09.037 } 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2867626 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2867626 ']' 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2867626 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2867626 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2867626' 00:06:09.037 killing process with pid 2867626 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2867626 00:06:09.037 15:11:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2867626 00:06:09.296 15:11:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2867829 00:06:09.296 15:11:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:09.296 15:11:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2867829 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2867829 ']' 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2867829 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2867829 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2867829' 00:06:14.561 killing process with pid 2867829 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2867829 00:06:14.561 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2867829 
00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.819 00:06:14.819 real 0m6.734s 00:06:14.819 user 0m6.506s 00:06:14.819 sys 0m0.637s 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 ************************************ 00:06:14.819 END TEST skip_rpc_with_json 00:06:14.819 ************************************ 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:14.819 15:11:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 ************************************ 00:06:14.819 START TEST skip_rpc_with_delay 00:06:14.819 ************************************ 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.819 [2024-07-15 15:11:18.624233] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
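The spdk_app_start error above is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt rejects --wait-for-rpc when --no-rpc-server removes the RPC server it would wait on (the unlink-lock error on the next line is cleanup fallout from the aborted start). Reduced to a shell check against this workspace's binary, the test amounts to:

  # A pass here is a prompt non-zero exit from spdk_tgt.
  if ! ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'refused --wait-for-rpc as expected'
  fi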
00:06:14.819 [2024-07-15 15:11:18.624299] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:14.819 00:06:14.819 real 0m0.065s 00:06:14.819 user 0m0.039s 00:06:14.819 sys 0m0.026s 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.819 15:11:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 ************************************ 00:06:14.819 END TEST skip_rpc_with_delay 00:06:14.819 ************************************ 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:14.819 15:11:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:14.819 15:11:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:14.819 15:11:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.819 15:11:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 ************************************ 00:06:14.819 START TEST exit_on_failed_rpc_init 00:06:14.819 ************************************ 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2868910 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2868910 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2868910 ']' 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 15:11:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.078 [2024-07-15 15:11:18.757369] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:15.078 [2024-07-15 15:11:18.757413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2868910 ] 00:06:15.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.078 [2024-07-15 15:11:18.826484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.078 [2024-07-15 15:11:18.900597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:15.645 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.903 [2024-07-15 15:11:19.570067] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:15.903 [2024-07-15 15:11:19.570119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869041 ] 00:06:15.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.903 [2024-07-15 15:11:19.638715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.903 [2024-07-15 15:11:19.708716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.903 [2024-07-15 15:11:19.708787] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:15.903 [2024-07-15 15:11:19.708798] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:15.903 [2024-07-15 15:11:19.708806] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2868910 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2868910 ']' 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2868910 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.903 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2868910 00:06:16.160 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.160 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.160 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2868910' 00:06:16.160 killing process with pid 2868910 00:06:16.160 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2868910 00:06:16.160 15:11:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2868910 00:06:16.432 00:06:16.432 real 0m1.432s 00:06:16.432 user 0m1.610s 00:06:16.432 sys 0m0.421s 00:06:16.432 15:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.432 15:11:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.432 ************************************ 00:06:16.432 END TEST exit_on_failed_rpc_init 00:06:16.432 ************************************ 00:06:16.432 15:11:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.432 15:11:20 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.432 00:06:16.432 real 0m14.022s 00:06:16.432 user 0m13.439s 00:06:16.432 sys 0m1.660s 00:06:16.432 15:11:20 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.432 15:11:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.432 ************************************ 00:06:16.432 END TEST skip_rpc 00:06:16.432 ************************************ 00:06:16.432 15:11:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.432 15:11:20 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.432 15:11:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.432 15:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.432 15:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:16.432 ************************************ 00:06:16.432 START TEST rpc_client 00:06:16.432 ************************************ 00:06:16.432 15:11:20 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.691 * Looking for test storage... 00:06:16.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:16.691 15:11:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:16.691 OK 00:06:16.691 15:11:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.691 00:06:16.691 real 0m0.130s 00:06:16.691 user 0m0.055s 00:06:16.691 sys 0m0.085s 00:06:16.691 15:11:20 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.691 15:11:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.691 ************************************ 00:06:16.691 END TEST rpc_client 00:06:16.691 ************************************ 00:06:16.691 15:11:20 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.691 15:11:20 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.691 15:11:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.691 15:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.691 15:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:16.691 ************************************ 00:06:16.691 START TEST json_config 00:06:16.691 ************************************ 00:06:16.691 15:11:20 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.691 
15:11:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.691 15:11:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.691 15:11:20 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.691 15:11:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.691 15:11:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.691 15:11:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.691 15:11:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.691 15:11:20 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.691 15:11:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@47 -- # : 0 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.691 15:11:20 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.691 15:11:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:16.691 INFO: JSON configuration test init 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:16.691 15:11:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.691 15:11:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:16.691 15:11:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.691 15:11:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.691 15:11:20 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.691 15:11:20 json_config -- json_config/common.sh@9 -- # local app=target 00:06:16.691 15:11:20 json_config -- json_config/common.sh@10 -- # shift 00:06:16.691 15:11:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.691 15:11:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.691 15:11:20 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.691 15:11:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.691 15:11:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.691 15:11:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2869301 00:06:16.691 15:11:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.691 Waiting for target to run... 00:06:16.692 15:11:20 json_config -- json_config/common.sh@25 -- # waitforlisten 2869301 /var/tmp/spdk_tgt.sock 00:06:16.692 15:11:20 json_config -- common/autotest_common.sh@829 -- # '[' -z 2869301 ']' 00:06:16.692 15:11:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.692 15:11:20 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.692 15:11:20 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.692 15:11:20 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.692 15:11:20 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.692 15:11:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.985 [2024-07-15 15:11:20.640618] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:16.985 [2024-07-15 15:11:20.640665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2869301 ] 00:06:16.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.271 [2024-07-15 15:11:21.080430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.271 [2024-07-15 15:11:21.164890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.529 15:11:21 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.529 15:11:21 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:17.529 15:11:21 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.529 00:06:17.529 15:11:21 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:17.529 15:11:21 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:17.529 15:11:21 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.529 15:11:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.796 15:11:21 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:17.796 15:11:21 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:17.796 15:11:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.797 15:11:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.797 15:11:21 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:17.797 15:11:21 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:17.797 15:11:21 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:21.089 15:11:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.089 15:11:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:21.089 15:11:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:21.089 15:11:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.089 15:11:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:21.089 15:11:24 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.089 15:11:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:21.089 15:11:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:21.089 MallocForNvmf0 00:06:21.089 15:11:24 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.089 15:11:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.347 MallocForNvmf1 00:06:21.347 15:11:25 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:21.348 15:11:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:21.605 [2024-07-15 15:11:25.263171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.605 15:11:25 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:21.605 15:11:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:21.605 15:11:25 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:21.605 15:11:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:21.863 15:11:25 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:21.863 15:11:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:22.120 15:11:25 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:22.120 15:11:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:22.120 [2024-07-15 15:11:25.961394] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:22.120 15:11:25 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:22.120 15:11:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.120 15:11:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.120 15:11:26 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:22.120 15:11:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.120 15:11:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.377 15:11:26 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:22.377 15:11:26 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:22.377 15:11:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:22.377 MallocBdevForConfigChangeCheck 00:06:22.377 15:11:26 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:22.377 15:11:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.377 15:11:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.377 15:11:26 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:22.377 15:11:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.941 15:11:26 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:22.941 INFO: shutting down applications... 00:06:22.941 15:11:26 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:22.941 15:11:26 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:22.941 15:11:26 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:22.941 15:11:26 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:24.839 Calling clear_iscsi_subsystem 00:06:24.839 Calling clear_nvmf_subsystem 00:06:24.839 Calling clear_nbd_subsystem 00:06:24.839 Calling clear_ublk_subsystem 00:06:24.839 Calling clear_vhost_blk_subsystem 00:06:24.839 Calling clear_vhost_scsi_subsystem 00:06:24.839 Calling clear_bdev_subsystem 00:06:24.839 15:11:28 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:24.839 15:11:28 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:24.839 15:11:28 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:24.839 15:11:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.839 15:11:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:24.839 15:11:28 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:25.096 15:11:28 json_config -- json_config/json_config.sh@345 -- # break 00:06:25.096 15:11:28 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:25.096 15:11:28 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:25.096 15:11:28 json_config -- json_config/common.sh@31 -- # local app=target 00:06:25.096 15:11:28 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:25.096 15:11:28 json_config -- json_config/common.sh@35 -- # [[ -n 2869301 ]] 00:06:25.096 15:11:28 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2869301 00:06:25.096 15:11:28 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.096 15:11:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.096 15:11:28 json_config -- json_config/common.sh@41 -- # kill -0 2869301 00:06:25.096 15:11:28 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:25.663 15:11:29 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:25.663 15:11:29 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.663 15:11:29 json_config -- json_config/common.sh@41 -- # kill -0 2869301 00:06:25.663 15:11:29 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:25.663 15:11:29 json_config -- json_config/common.sh@43 -- # break 00:06:25.663 15:11:29 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:25.663 15:11:29 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:25.663 SPDK target shutdown done 00:06:25.663 15:11:29 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:25.663 INFO: relaunching applications... 00:06:25.663 15:11:29 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.663 15:11:29 json_config -- json_config/common.sh@9 -- # local app=target 00:06:25.663 15:11:29 json_config -- json_config/common.sh@10 -- # shift 00:06:25.663 15:11:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.663 15:11:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.663 15:11:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.663 15:11:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.663 15:11:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.663 15:11:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2871008 00:06:25.663 15:11:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.663 Waiting for target to run... 00:06:25.663 15:11:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:25.663 15:11:29 json_config -- json_config/common.sh@25 -- # waitforlisten 2871008 /var/tmp/spdk_tgt.sock 00:06:25.663 15:11:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 2871008 ']' 00:06:25.663 15:11:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.663 15:11:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.663 15:11:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.663 15:11:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.663 15:11:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.663 [2024-07-15 15:11:29.536137] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:25.663 [2024-07-15 15:11:29.536199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2871008 ] 00:06:25.663 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.230 [2024-07-15 15:11:29.977526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.230 [2024-07-15 15:11:30.062578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.519 [2024-07-15 15:11:33.094062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.519 [2024-07-15 15:11:33.126475] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.082 15:11:33 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.082 15:11:33 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:30.082 15:11:33 json_config -- json_config/common.sh@26 -- # echo '' 00:06:30.082 00:06:30.082 15:11:33 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:30.082 15:11:33 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:30.082 INFO: Checking if target configuration is the same... 00:06:30.082 15:11:33 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.082 15:11:33 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:30.082 15:11:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.082 + '[' 2 -ne 2 ']' 00:06:30.082 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:30.082 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:30.082 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.082 +++ basename /dev/fd/62 00:06:30.082 ++ mktemp /tmp/62.XXX 00:06:30.082 + tmp_file_1=/tmp/62.46c 00:06:30.082 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.082 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.082 + tmp_file_2=/tmp/spdk_tgt_config.json.1dE 00:06:30.082 + ret=0 00:06:30.082 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.341 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.341 + diff -u /tmp/62.46c /tmp/spdk_tgt_config.json.1dE 00:06:30.341 + echo 'INFO: JSON config files are the same' 00:06:30.341 INFO: JSON config files are the same 00:06:30.341 + rm /tmp/62.46c /tmp/spdk_tgt_config.json.1dE 00:06:30.341 + exit 0 00:06:30.341 15:11:34 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:30.341 15:11:34 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:30.341 INFO: changing configuration and checking if this can be detected... 
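Detection works the same way as the 'JSON config files are the same' pass above: take a fresh save_config dump, normalize both sides with config_filter.py -method sort, and diff them, this time expecting a non-empty diff (ret=1). Condensed, with illustrative temp-file names standing in for the mktemp output seen in the log:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
  diff -u /tmp/disk.sorted /tmp/live.sorted   # non-empty output means a change was detected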
00:06:30.341 15:11:34 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.341 15:11:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.341 15:11:34 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:30.341 15:11:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.341 15:11:34 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.341 + '[' 2 -ne 2 ']' 00:06:30.341 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:30.341 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:30.341 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.341 +++ basename /dev/fd/62 00:06:30.600 ++ mktemp /tmp/62.XXX 00:06:30.600 + tmp_file_1=/tmp/62.fPe 00:06:30.600 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.600 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.600 + tmp_file_2=/tmp/spdk_tgt_config.json.kAV 00:06:30.600 + ret=0 00:06:30.600 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.859 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.859 + diff -u /tmp/62.fPe /tmp/spdk_tgt_config.json.kAV 00:06:30.859 + ret=1 00:06:30.859 + echo '=== Start of file: /tmp/62.fPe ===' 00:06:30.859 + cat /tmp/62.fPe 00:06:30.859 + echo '=== End of file: /tmp/62.fPe ===' 00:06:30.859 + echo '' 00:06:30.859 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kAV ===' 00:06:30.859 + cat /tmp/spdk_tgt_config.json.kAV 00:06:30.859 + echo '=== End of file: /tmp/spdk_tgt_config.json.kAV ===' 00:06:30.859 + echo '' 00:06:30.859 + rm /tmp/62.fPe /tmp/spdk_tgt_config.json.kAV 00:06:30.859 + exit 1 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:30.859 INFO: configuration change detected. 
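The change that produced the non-empty diff is the single RPC issued at the start of this block, deleting the scratch bdev (MallocBdevForConfigChangeCheck) that was created during setup precisely so it could be removed later. Replayed by hand against the same socket:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck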
00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@317 -- # [[ -n 2871008 ]] 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.859 15:11:34 json_config -- json_config/json_config.sh@323 -- # killprocess 2871008 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@948 -- # '[' -z 2871008 ']' 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@952 -- # kill -0 2871008 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@953 -- # uname 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2871008 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2871008' 00:06:30.859 killing process with pid 2871008 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@967 -- # kill 2871008 00:06:30.859 15:11:34 json_config -- common/autotest_common.sh@972 -- # wait 2871008 00:06:33.388 15:11:36 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.388 15:11:36 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:33.388 15:11:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.388 15:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.388 15:11:36 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:33.388 15:11:36 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:33.388 INFO: Success 00:06:33.388 00:06:33.388 real 0m16.279s 
00:06:33.388 user 0m16.616s 00:06:33.388 sys 0m2.358s 00:06:33.388 15:11:36 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.388 15:11:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.388 ************************************ 00:06:33.388 END TEST json_config 00:06:33.388 ************************************ 00:06:33.388 15:11:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.388 15:11:36 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:33.388 15:11:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.388 15:11:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.388 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:06:33.388 ************************************ 00:06:33.388 START TEST json_config_extra_key 00:06:33.388 ************************************ 00:06:33.388 15:11:36 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:33.388 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.388 15:11:36 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.388 15:11:36 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.388 15:11:36 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.388 15:11:36 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.388 15:11:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.388 15:11:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.388 15:11:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:33.388 15:11:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.388 15:11:36 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:33.389 15:11:36 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:33.389 INFO: launching applications... 00:06:33.389 15:11:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2872440 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.389 Waiting for target to run... 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2872440 /var/tmp/spdk_tgt.sock 00:06:33.389 15:11:36 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2872440 ']' 00:06:33.389 15:11:36 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.389 15:11:36 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:33.389 15:11:36 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.389 15:11:36 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.389 15:11:36 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.389 15:11:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.389 [2024-07-15 15:11:36.984619] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:33.389 [2024-07-15 15:11:36.984674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872440 ] 00:06:33.389 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.646 [2024-07-15 15:11:37.411553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.646 [2024-07-15 15:11:37.497073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.904 15:11:37 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.904 15:11:37 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:33.904 00:06:33.904 15:11:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:33.904 INFO: shutting down applications... 00:06:33.904 15:11:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2872440 ]] 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2872440 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2872440 00:06:33.904 15:11:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.470 15:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.470 15:11:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.470 15:11:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2872440 00:06:34.470 15:11:38 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:34.470 15:11:38 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:34.470 15:11:38 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:34.470 15:11:38 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:34.470 SPDK target shutdown done 00:06:34.470 15:11:38 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:34.470 Success 00:06:34.470 00:06:34.470 real 0m1.456s 00:06:34.470 user 0m1.039s 00:06:34.470 sys 0m0.564s 00:06:34.470 15:11:38 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.470 15:11:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:34.470 ************************************ 00:06:34.470 END TEST json_config_extra_key 00:06:34.470 ************************************ 00:06:34.470 15:11:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.470 15:11:38 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.470 15:11:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.470 15:11:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.470 15:11:38 -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.470 ************************************ 00:06:34.470 START TEST alias_rpc 00:06:34.470 ************************************ 00:06:34.470 15:11:38 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.728 * Looking for test storage... 00:06:34.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:34.728 15:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.728 15:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2872755 00:06:34.728 15:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2872755 00:06:34.728 15:11:38 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.728 15:11:38 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2872755 ']' 00:06:34.728 15:11:38 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.728 15:11:38 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.728 15:11:38 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.728 15:11:38 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.728 15:11:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.728 [2024-07-15 15:11:38.515859] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:34.728 [2024-07-15 15:11:38.515911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872755 ] 00:06:34.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.728 [2024-07-15 15:11:38.584461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.985 [2024-07-15 15:11:38.659596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.548 15:11:39 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.548 15:11:39 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:35.548 15:11:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:35.806 15:11:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2872755 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2872755 ']' 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2872755 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2872755 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2872755' 00:06:35.806 killing process with pid 2872755 00:06:35.806 15:11:39 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 2872755 00:06:35.806 15:11:39 alias_rpc -- common/autotest_common.sh@972 -- # wait 2872755 00:06:36.063 00:06:36.063 real 0m1.517s 00:06:36.063 user 0m1.630s 00:06:36.063 sys 0m0.436s 00:06:36.063 15:11:39 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.063 15:11:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.063 ************************************ 00:06:36.063 END TEST alias_rpc 00:06:36.063 ************************************ 00:06:36.063 15:11:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.063 15:11:39 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:36.063 15:11:39 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:36.063 15:11:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.063 15:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.063 15:11:39 -- common/autotest_common.sh@10 -- # set +x 00:06:36.063 ************************************ 00:06:36.063 START TEST spdkcli_tcp 00:06:36.063 ************************************ 00:06:36.063 15:11:39 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:36.320 * Looking for test storage... 00:06:36.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:36.320 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:36.320 15:11:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:36.320 15:11:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:36.320 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:36.320 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:36.320 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:36.320 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:36.320 15:11:40 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.320 15:11:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.321 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2873075 00:06:36.321 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2873075 00:06:36.321 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:36.321 15:11:40 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2873075 ']' 00:06:36.321 15:11:40 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.321 15:11:40 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.321 15:11:40 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.321 15:11:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.321 15:11:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.321 [2024-07-15 15:11:40.097758] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
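The alias_rpc pass above drives a plain spdk_tgt through scripts/rpc.py load_config -i, where -i (--include-aliases) lets deprecated method names in the JSON resolve to their current RPC names. A hedged sketch of the round trip, assuming a running target on the default /var/tmp/spdk.sock (the dump file path is illustrative):

./scripts/rpc.py save_config > /tmp/config.json     # capture the live configuration
./scripts/rpc.py load_config -i < /tmp/config.json  # replay it, accepting aliased method names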
00:06:36.321 [2024-07-15 15:11:40.097811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873075 ] 00:06:36.321 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.321 [2024-07-15 15:11:40.167358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.596 [2024-07-15 15:11:40.243031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.596 [2024-07-15 15:11:40.243035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.214 15:11:40 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.214 15:11:40 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:37.214 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2873145 00:06:37.214 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:37.214 15:11:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:37.214 [ 00:06:37.214 "bdev_malloc_delete", 00:06:37.214 "bdev_malloc_create", 00:06:37.214 "bdev_null_resize", 00:06:37.214 "bdev_null_delete", 00:06:37.214 "bdev_null_create", 00:06:37.214 "bdev_nvme_cuse_unregister", 00:06:37.214 "bdev_nvme_cuse_register", 00:06:37.214 "bdev_opal_new_user", 00:06:37.214 "bdev_opal_set_lock_state", 00:06:37.214 "bdev_opal_delete", 00:06:37.214 "bdev_opal_get_info", 00:06:37.214 "bdev_opal_create", 00:06:37.214 "bdev_nvme_opal_revert", 00:06:37.214 "bdev_nvme_opal_init", 00:06:37.214 "bdev_nvme_send_cmd", 00:06:37.214 "bdev_nvme_get_path_iostat", 00:06:37.214 "bdev_nvme_get_mdns_discovery_info", 00:06:37.214 "bdev_nvme_stop_mdns_discovery", 00:06:37.214 "bdev_nvme_start_mdns_discovery", 00:06:37.214 "bdev_nvme_set_multipath_policy", 00:06:37.214 "bdev_nvme_set_preferred_path", 00:06:37.214 "bdev_nvme_get_io_paths", 00:06:37.214 "bdev_nvme_remove_error_injection", 00:06:37.214 "bdev_nvme_add_error_injection", 00:06:37.214 "bdev_nvme_get_discovery_info", 00:06:37.214 "bdev_nvme_stop_discovery", 00:06:37.214 "bdev_nvme_start_discovery", 00:06:37.214 "bdev_nvme_get_controller_health_info", 00:06:37.214 "bdev_nvme_disable_controller", 00:06:37.214 "bdev_nvme_enable_controller", 00:06:37.214 "bdev_nvme_reset_controller", 00:06:37.214 "bdev_nvme_get_transport_statistics", 00:06:37.214 "bdev_nvme_apply_firmware", 00:06:37.214 "bdev_nvme_detach_controller", 00:06:37.214 "bdev_nvme_get_controllers", 00:06:37.214 "bdev_nvme_attach_controller", 00:06:37.214 "bdev_nvme_set_hotplug", 00:06:37.214 "bdev_nvme_set_options", 00:06:37.214 "bdev_passthru_delete", 00:06:37.214 "bdev_passthru_create", 00:06:37.214 "bdev_lvol_set_parent_bdev", 00:06:37.214 "bdev_lvol_set_parent", 00:06:37.214 "bdev_lvol_check_shallow_copy", 00:06:37.214 "bdev_lvol_start_shallow_copy", 00:06:37.214 "bdev_lvol_grow_lvstore", 00:06:37.214 "bdev_lvol_get_lvols", 00:06:37.214 "bdev_lvol_get_lvstores", 00:06:37.214 "bdev_lvol_delete", 00:06:37.214 "bdev_lvol_set_read_only", 00:06:37.214 "bdev_lvol_resize", 00:06:37.214 "bdev_lvol_decouple_parent", 00:06:37.214 "bdev_lvol_inflate", 00:06:37.214 "bdev_lvol_rename", 00:06:37.214 "bdev_lvol_clone_bdev", 00:06:37.214 "bdev_lvol_clone", 00:06:37.214 "bdev_lvol_snapshot", 00:06:37.214 "bdev_lvol_create", 00:06:37.214 "bdev_lvol_delete_lvstore", 00:06:37.214 
"bdev_lvol_rename_lvstore", 00:06:37.214 "bdev_lvol_create_lvstore", 00:06:37.214 "bdev_raid_set_options", 00:06:37.214 "bdev_raid_remove_base_bdev", 00:06:37.214 "bdev_raid_add_base_bdev", 00:06:37.214 "bdev_raid_delete", 00:06:37.214 "bdev_raid_create", 00:06:37.214 "bdev_raid_get_bdevs", 00:06:37.214 "bdev_error_inject_error", 00:06:37.214 "bdev_error_delete", 00:06:37.214 "bdev_error_create", 00:06:37.214 "bdev_split_delete", 00:06:37.214 "bdev_split_create", 00:06:37.214 "bdev_delay_delete", 00:06:37.214 "bdev_delay_create", 00:06:37.214 "bdev_delay_update_latency", 00:06:37.214 "bdev_zone_block_delete", 00:06:37.214 "bdev_zone_block_create", 00:06:37.214 "blobfs_create", 00:06:37.214 "blobfs_detect", 00:06:37.214 "blobfs_set_cache_size", 00:06:37.214 "bdev_aio_delete", 00:06:37.214 "bdev_aio_rescan", 00:06:37.214 "bdev_aio_create", 00:06:37.214 "bdev_ftl_set_property", 00:06:37.214 "bdev_ftl_get_properties", 00:06:37.214 "bdev_ftl_get_stats", 00:06:37.214 "bdev_ftl_unmap", 00:06:37.214 "bdev_ftl_unload", 00:06:37.214 "bdev_ftl_delete", 00:06:37.214 "bdev_ftl_load", 00:06:37.214 "bdev_ftl_create", 00:06:37.214 "bdev_virtio_attach_controller", 00:06:37.214 "bdev_virtio_scsi_get_devices", 00:06:37.214 "bdev_virtio_detach_controller", 00:06:37.214 "bdev_virtio_blk_set_hotplug", 00:06:37.214 "bdev_iscsi_delete", 00:06:37.214 "bdev_iscsi_create", 00:06:37.214 "bdev_iscsi_set_options", 00:06:37.214 "accel_error_inject_error", 00:06:37.214 "ioat_scan_accel_module", 00:06:37.214 "dsa_scan_accel_module", 00:06:37.214 "iaa_scan_accel_module", 00:06:37.214 "vfu_virtio_create_scsi_endpoint", 00:06:37.214 "vfu_virtio_scsi_remove_target", 00:06:37.214 "vfu_virtio_scsi_add_target", 00:06:37.214 "vfu_virtio_create_blk_endpoint", 00:06:37.214 "vfu_virtio_delete_endpoint", 00:06:37.214 "keyring_file_remove_key", 00:06:37.214 "keyring_file_add_key", 00:06:37.214 "keyring_linux_set_options", 00:06:37.214 "iscsi_get_histogram", 00:06:37.214 "iscsi_enable_histogram", 00:06:37.214 "iscsi_set_options", 00:06:37.214 "iscsi_get_auth_groups", 00:06:37.214 "iscsi_auth_group_remove_secret", 00:06:37.214 "iscsi_auth_group_add_secret", 00:06:37.214 "iscsi_delete_auth_group", 00:06:37.214 "iscsi_create_auth_group", 00:06:37.214 "iscsi_set_discovery_auth", 00:06:37.214 "iscsi_get_options", 00:06:37.214 "iscsi_target_node_request_logout", 00:06:37.214 "iscsi_target_node_set_redirect", 00:06:37.214 "iscsi_target_node_set_auth", 00:06:37.214 "iscsi_target_node_add_lun", 00:06:37.214 "iscsi_get_stats", 00:06:37.214 "iscsi_get_connections", 00:06:37.214 "iscsi_portal_group_set_auth", 00:06:37.214 "iscsi_start_portal_group", 00:06:37.214 "iscsi_delete_portal_group", 00:06:37.214 "iscsi_create_portal_group", 00:06:37.214 "iscsi_get_portal_groups", 00:06:37.214 "iscsi_delete_target_node", 00:06:37.214 "iscsi_target_node_remove_pg_ig_maps", 00:06:37.214 "iscsi_target_node_add_pg_ig_maps", 00:06:37.214 "iscsi_create_target_node", 00:06:37.214 "iscsi_get_target_nodes", 00:06:37.214 "iscsi_delete_initiator_group", 00:06:37.214 "iscsi_initiator_group_remove_initiators", 00:06:37.214 "iscsi_initiator_group_add_initiators", 00:06:37.214 "iscsi_create_initiator_group", 00:06:37.214 "iscsi_get_initiator_groups", 00:06:37.214 "nvmf_set_crdt", 00:06:37.214 "nvmf_set_config", 00:06:37.214 "nvmf_set_max_subsystems", 00:06:37.214 "nvmf_stop_mdns_prr", 00:06:37.214 "nvmf_publish_mdns_prr", 00:06:37.214 "nvmf_subsystem_get_listeners", 00:06:37.214 "nvmf_subsystem_get_qpairs", 00:06:37.214 "nvmf_subsystem_get_controllers", 00:06:37.214 
"nvmf_get_stats", 00:06:37.214 "nvmf_get_transports", 00:06:37.214 "nvmf_create_transport", 00:06:37.214 "nvmf_get_targets", 00:06:37.214 "nvmf_delete_target", 00:06:37.214 "nvmf_create_target", 00:06:37.214 "nvmf_subsystem_allow_any_host", 00:06:37.214 "nvmf_subsystem_remove_host", 00:06:37.214 "nvmf_subsystem_add_host", 00:06:37.214 "nvmf_ns_remove_host", 00:06:37.214 "nvmf_ns_add_host", 00:06:37.214 "nvmf_subsystem_remove_ns", 00:06:37.214 "nvmf_subsystem_add_ns", 00:06:37.214 "nvmf_subsystem_listener_set_ana_state", 00:06:37.214 "nvmf_discovery_get_referrals", 00:06:37.214 "nvmf_discovery_remove_referral", 00:06:37.214 "nvmf_discovery_add_referral", 00:06:37.214 "nvmf_subsystem_remove_listener", 00:06:37.214 "nvmf_subsystem_add_listener", 00:06:37.214 "nvmf_delete_subsystem", 00:06:37.214 "nvmf_create_subsystem", 00:06:37.214 "nvmf_get_subsystems", 00:06:37.214 "env_dpdk_get_mem_stats", 00:06:37.214 "nbd_get_disks", 00:06:37.214 "nbd_stop_disk", 00:06:37.214 "nbd_start_disk", 00:06:37.214 "ublk_recover_disk", 00:06:37.214 "ublk_get_disks", 00:06:37.214 "ublk_stop_disk", 00:06:37.214 "ublk_start_disk", 00:06:37.214 "ublk_destroy_target", 00:06:37.214 "ublk_create_target", 00:06:37.214 "virtio_blk_create_transport", 00:06:37.214 "virtio_blk_get_transports", 00:06:37.214 "vhost_controller_set_coalescing", 00:06:37.214 "vhost_get_controllers", 00:06:37.214 "vhost_delete_controller", 00:06:37.214 "vhost_create_blk_controller", 00:06:37.214 "vhost_scsi_controller_remove_target", 00:06:37.214 "vhost_scsi_controller_add_target", 00:06:37.214 "vhost_start_scsi_controller", 00:06:37.214 "vhost_create_scsi_controller", 00:06:37.214 "thread_set_cpumask", 00:06:37.214 "framework_get_governor", 00:06:37.215 "framework_get_scheduler", 00:06:37.215 "framework_set_scheduler", 00:06:37.215 "framework_get_reactors", 00:06:37.215 "thread_get_io_channels", 00:06:37.215 "thread_get_pollers", 00:06:37.215 "thread_get_stats", 00:06:37.215 "framework_monitor_context_switch", 00:06:37.215 "spdk_kill_instance", 00:06:37.215 "log_enable_timestamps", 00:06:37.215 "log_get_flags", 00:06:37.215 "log_clear_flag", 00:06:37.215 "log_set_flag", 00:06:37.215 "log_get_level", 00:06:37.215 "log_set_level", 00:06:37.215 "log_get_print_level", 00:06:37.215 "log_set_print_level", 00:06:37.215 "framework_enable_cpumask_locks", 00:06:37.215 "framework_disable_cpumask_locks", 00:06:37.215 "framework_wait_init", 00:06:37.215 "framework_start_init", 00:06:37.215 "scsi_get_devices", 00:06:37.215 "bdev_get_histogram", 00:06:37.215 "bdev_enable_histogram", 00:06:37.215 "bdev_set_qos_limit", 00:06:37.215 "bdev_set_qd_sampling_period", 00:06:37.215 "bdev_get_bdevs", 00:06:37.215 "bdev_reset_iostat", 00:06:37.215 "bdev_get_iostat", 00:06:37.215 "bdev_examine", 00:06:37.215 "bdev_wait_for_examine", 00:06:37.215 "bdev_set_options", 00:06:37.215 "notify_get_notifications", 00:06:37.215 "notify_get_types", 00:06:37.215 "accel_get_stats", 00:06:37.215 "accel_set_options", 00:06:37.215 "accel_set_driver", 00:06:37.215 "accel_crypto_key_destroy", 00:06:37.215 "accel_crypto_keys_get", 00:06:37.215 "accel_crypto_key_create", 00:06:37.215 "accel_assign_opc", 00:06:37.215 "accel_get_module_info", 00:06:37.215 "accel_get_opc_assignments", 00:06:37.215 "vmd_rescan", 00:06:37.215 "vmd_remove_device", 00:06:37.215 "vmd_enable", 00:06:37.215 "sock_get_default_impl", 00:06:37.215 "sock_set_default_impl", 00:06:37.215 "sock_impl_set_options", 00:06:37.215 "sock_impl_get_options", 00:06:37.215 "iobuf_get_stats", 00:06:37.215 "iobuf_set_options", 
00:06:37.215 "keyring_get_keys", 00:06:37.215 "framework_get_pci_devices", 00:06:37.215 "framework_get_config", 00:06:37.215 "framework_get_subsystems", 00:06:37.215 "vfu_tgt_set_base_path", 00:06:37.215 "trace_get_info", 00:06:37.215 "trace_get_tpoint_group_mask", 00:06:37.215 "trace_disable_tpoint_group", 00:06:37.215 "trace_enable_tpoint_group", 00:06:37.215 "trace_clear_tpoint_mask", 00:06:37.215 "trace_set_tpoint_mask", 00:06:37.215 "spdk_get_version", 00:06:37.215 "rpc_get_methods" 00:06:37.215 ] 00:06:37.215 15:11:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:37.215 15:11:41 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.215 15:11:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.215 15:11:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:37.215 15:11:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2873075 00:06:37.215 15:11:41 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2873075 ']' 00:06:37.215 15:11:41 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2873075 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873075 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873075' 00:06:37.473 killing process with pid 2873075 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2873075 00:06:37.473 15:11:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2873075 00:06:37.732 00:06:37.732 real 0m1.545s 00:06:37.732 user 0m2.825s 00:06:37.732 sys 0m0.500s 00:06:37.732 15:11:41 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.732 15:11:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.732 ************************************ 00:06:37.732 END TEST spdkcli_tcp 00:06:37.732 ************************************ 00:06:37.732 15:11:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.732 15:11:41 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.732 15:11:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.732 15:11:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.732 15:11:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.732 ************************************ 00:06:37.732 START TEST dpdk_mem_utility 00:06:37.732 ************************************ 00:06:37.732 15:11:41 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.991 * Looking for test storage... 
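The spdkcli_tcp section above verifies the RPC surface over TCP by bridging the UNIX-domain socket with socat and pointing rpc.py at 127.0.0.1:9998, which is what produces the long rpc_get_methods listing. Reconstructed as a standalone sketch, with the socket path, port, and flags taken from the trace:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# Same RPC surface, now reachable over TCP; -r sets connection retries, -t the per-call timeout.
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill $socat_pid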
00:06:37.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:37.991 15:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:37.991 15:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.991 15:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2873414 00:06:37.991 15:11:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2873414 00:06:37.991 15:11:41 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2873414 ']' 00:06:37.991 15:11:41 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.991 15:11:41 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.991 15:11:41 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.991 15:11:41 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.991 15:11:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.991 [2024-07-15 15:11:41.709756] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:37.991 [2024-07-15 15:11:41.709819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873414 ] 00:06:37.991 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.991 [2024-07-15 15:11:41.778693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.991 [2024-07-15 15:11:41.853735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.925 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.925 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:38.925 15:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:38.925 15:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:38.925 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.925 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:38.925 { 00:06:38.925 "filename": "/tmp/spdk_mem_dump.txt" 00:06:38.925 } 00:06:38.925 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.925 15:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:38.925 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:38.925 1 heaps totaling size 814.000000 MiB 00:06:38.925 size: 814.000000 MiB heap id: 0 00:06:38.925 end heaps---------- 00:06:38.925 8 mempools totaling size 598.116089 MiB 00:06:38.925 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:38.925 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:38.925 size: 84.521057 MiB name: bdev_io_2873414 00:06:38.925 size: 51.011292 MiB name: evtpool_2873414 00:06:38.925 
size: 50.003479 MiB name: msgpool_2873414 00:06:38.925 size: 21.763794 MiB name: PDU_Pool 00:06:38.925 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:38.925 size: 0.026123 MiB name: Session_Pool 00:06:38.925 end mempools------- 00:06:38.925 6 memzones totaling size 4.142822 MiB 00:06:38.925 size: 1.000366 MiB name: RG_ring_0_2873414 00:06:38.925 size: 1.000366 MiB name: RG_ring_1_2873414 00:06:38.925 size: 1.000366 MiB name: RG_ring_4_2873414 00:06:38.925 size: 1.000366 MiB name: RG_ring_5_2873414 00:06:38.925 size: 0.125366 MiB name: RG_ring_2_2873414 00:06:38.925 size: 0.015991 MiB name: RG_ring_3_2873414 00:06:38.925 end memzones------- 00:06:38.925 15:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:38.925 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:38.925 list of free elements. size: 12.519348 MiB 00:06:38.925 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:38.925 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:38.925 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:38.925 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:38.925 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:38.925 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:38.925 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:38.925 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:38.925 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:38.925 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:38.925 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:38.925 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:38.925 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:38.925 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:38.925 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:38.925 list of standard malloc elements. 
size: 199.218079 MiB 00:06:38.925 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:38.925 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:38.925 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:38.925 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:38.925 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:38.925 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:38.925 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:38.925 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:38.925 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:38.925 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:38.925 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:38.925 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:38.925 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:38.925 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:38.925 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:38.925 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:38.925 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:38.925 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:38.925 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:38.925 list of memzone associated elements. 
size: 602.262573 MiB 00:06:38.925 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:38.925 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:38.925 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:38.925 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:38.925 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:38.925 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2873414_0 00:06:38.925 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:38.925 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2873414_0 00:06:38.925 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:38.925 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2873414_0 00:06:38.925 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:38.925 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:38.925 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:38.925 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:38.925 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:38.925 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2873414 00:06:38.925 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:38.926 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2873414 00:06:38.926 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:38.926 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2873414 00:06:38.926 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:38.926 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:38.926 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:38.926 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:38.926 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:38.926 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:38.926 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:38.926 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:38.926 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:38.926 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2873414 00:06:38.926 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:38.926 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2873414 00:06:38.926 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:38.926 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2873414 00:06:38.926 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:38.926 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2873414 00:06:38.926 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:38.926 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2873414 00:06:38.926 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:38.926 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:38.926 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:38.926 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:38.926 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:38.926 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:38.926 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:38.926 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2873414 00:06:38.926 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:38.926 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:38.926 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:38.926 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:38.926 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:38.926 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2873414 00:06:38.926 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:38.926 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:38.926 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:38.926 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2873414 00:06:38.926 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:38.926 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2873414 00:06:38.926 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:38.926 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:38.926 15:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:38.926 15:11:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2873414 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2873414 ']' 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2873414 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2873414 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2873414' 00:06:38.926 killing process with pid 2873414 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2873414 00:06:38.926 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2873414 00:06:39.184 00:06:39.184 real 0m1.411s 00:06:39.184 user 0m1.442s 00:06:39.184 sys 0m0.444s 00:06:39.184 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.184 15:11:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:39.184 ************************************ 00:06:39.184 END TEST dpdk_mem_utility 00:06:39.184 ************************************ 00:06:39.184 15:11:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.184 15:11:43 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:39.184 15:11:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.184 15:11:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.184 15:11:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.184 ************************************ 00:06:39.184 START TEST event 00:06:39.184 ************************************ 00:06:39.184 15:11:43 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:39.442 * Looking for test storage... 
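The dpdk_mem_utility dump above is produced in two steps: env_dpdk_get_mem_stats asks the target to write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders it. A sketch matching the calls in the trace, run against a live spdk_tgt:

./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0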
00:06:39.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:39.442 15:11:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:39.442 15:11:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:39.442 15:11:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:39.442 15:11:43 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:39.442 15:11:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.442 15:11:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.442 ************************************ 00:06:39.442 START TEST event_perf 00:06:39.442 ************************************ 00:06:39.442 15:11:43 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:39.442 Running I/O for 1 seconds...[2024-07-15 15:11:43.210261] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:39.442 [2024-07-15 15:11:43.210348] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873733 ] 00:06:39.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.442 [2024-07-15 15:11:43.283783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.699 [2024-07-15 15:11:43.361061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.699 [2024-07-15 15:11:43.361158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.699 [2024-07-15 15:11:43.361242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.699 [2024-07-15 15:11:43.361244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.634 Running I/O for 1 seconds... 00:06:40.634 lcore 0: 217407 00:06:40.634 lcore 1: 217406 00:06:40.634 lcore 2: 217407 00:06:40.634 lcore 3: 217407 00:06:40.634 done. 00:06:40.634 00:06:40.634 real 0m1.242s 00:06:40.634 user 0m4.142s 00:06:40.634 sys 0m0.096s 00:06:40.634 15:11:44 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.634 15:11:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.634 ************************************ 00:06:40.634 END TEST event_perf 00:06:40.634 ************************************ 00:06:40.634 15:11:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:40.634 15:11:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:40.634 15:11:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.634 15:11:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.634 15:11:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.634 ************************************ 00:06:40.634 START TEST event_reactor 00:06:40.634 ************************************ 00:06:40.634 15:11:44 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:40.634 [2024-07-15 15:11:44.535874] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:40.634 [2024-07-15 15:11:44.535953] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874023 ] 00:06:40.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.892 [2024-07-15 15:11:44.610372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.892 [2024-07-15 15:11:44.677484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.266 test_start 00:06:42.266 oneshot 00:06:42.266 tick 100 00:06:42.266 tick 100 00:06:42.266 tick 250 00:06:42.266 tick 100 00:06:42.266 tick 100 00:06:42.266 tick 250 00:06:42.266 tick 100 00:06:42.266 tick 500 00:06:42.266 tick 100 00:06:42.266 tick 100 00:06:42.266 tick 250 00:06:42.266 tick 100 00:06:42.266 tick 100 00:06:42.266 test_end 00:06:42.266 00:06:42.266 real 0m1.230s 00:06:42.266 user 0m1.133s 00:06:42.266 sys 0m0.093s 00:06:42.266 15:11:45 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.266 15:11:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:42.266 ************************************ 00:06:42.267 END TEST event_reactor 00:06:42.267 ************************************ 00:06:42.267 15:11:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:42.267 15:11:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:42.267 15:11:45 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.267 15:11:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.267 15:11:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.267 ************************************ 00:06:42.267 START TEST event_reactor_perf 00:06:42.267 ************************************ 00:06:42.267 15:11:45 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:42.267 [2024-07-15 15:11:45.848408] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
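For the event_perf run above, the core mask selects the reactors and -t sets the measurement window, so each of the four lcores in 0xF reports its own event count (about 217k events in the 1-second window here). The invocation, copied from the trace:

./test/event/event_perf/event_perf -m 0xF -t 1   # 0xF = lcores 0-3, run for 1 second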
00:06:42.267 [2024-07-15 15:11:45.848475] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874267 ] 00:06:42.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.267 [2024-07-15 15:11:45.922011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.267 [2024-07-15 15:11:45.989128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.201 test_start 00:06:43.201 test_end 00:06:43.201 Performance: 532751 events per second 00:06:43.201 00:06:43.201 real 0m1.230s 00:06:43.201 user 0m1.131s 00:06:43.201 sys 0m0.095s 00:06:43.201 15:11:47 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.201 15:11:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.201 ************************************ 00:06:43.201 END TEST event_reactor_perf 00:06:43.201 ************************************ 00:06:43.201 15:11:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.201 15:11:47 event -- event/event.sh@49 -- # uname -s 00:06:43.201 15:11:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:43.201 15:11:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:43.201 15:11:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.201 15:11:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.201 15:11:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.459 ************************************ 00:06:43.459 START TEST event_scheduler 00:06:43.459 ************************************ 00:06:43.459 15:11:47 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:43.459 * Looking for test storage... 00:06:43.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:43.459 15:11:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:43.459 15:11:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:43.459 15:11:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2874511 00:06:43.459 15:11:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.459 15:11:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2874511 00:06:43.459 15:11:47 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2874511 ']' 00:06:43.459 15:11:47 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.459 15:11:47 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.459 15:11:47 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.459 15:11:47 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.459 15:11:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.459 [2024-07-15 15:11:47.264001] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:43.459 [2024-07-15 15:11:47.264061] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2874511 ] 00:06:43.459 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.459 [2024-07-15 15:11:47.332409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.717 [2024-07-15 15:11:47.412970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.717 [2024-07-15 15:11:47.413064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.717 [2024-07-15 15:11:47.413146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.717 [2024-07-15 15:11:47.413148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:44.283 15:11:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.283 [2024-07-15 15:11:48.083518] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:44.283 [2024-07-15 15:11:48.083542] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:44.283 [2024-07-15 15:11:48.083552] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:44.283 [2024-07-15 15:11:48.083560] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:44.283 [2024-07-15 15:11:48.083568] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.283 15:11:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.283 [2024-07-15 15:11:48.155663] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
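The scheduler test above switches the app to the dynamic scheduler before subsystem init, which is why it launches with --wait-for-rpc: framework_set_scheduler has to land before framework_start_init. Note the fallback visible in the trace: the DPDK governor fails to initialize and the dynamic scheduler proceeds with load limit 20, core limit 80, and core busy 95. A sketch of the same sequence over rpc.py, assuming a target started with --wait-for-rpc:

./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
./scripts/rpc.py framework_get_reactors   # inspect reactor/thread placement afterwards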
00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.283 15:11:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.283 15:11:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 ************************************ 00:06:44.542 START TEST scheduler_create_thread 00:06:44.542 ************************************ 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 2 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 3 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 4 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 5 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 6 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 7 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 8 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 9 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 10 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.542 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.108 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.108 15:11:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:45.108 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.108 15:11:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.481 15:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.481 15:11:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:46.481 15:11:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:46.481 15:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.481 15:11:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.414 15:11:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.414 00:06:47.414 real 0m3.100s 00:06:47.414 user 0m0.023s 00:06:47.414 sys 0m0.008s 00:06:47.414 15:11:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.414 15:11:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.414 ************************************ 00:06:47.414 END TEST scheduler_create_thread 00:06:47.414 ************************************ 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:47.673 15:11:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:47.673 15:11:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2874511 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2874511 ']' 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2874511 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2874511 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2874511' 00:06:47.673 killing process with pid 2874511 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2874511 00:06:47.673 15:11:51 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2874511 00:06:47.932 [2024-07-15 15:11:51.675046] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:48.191 00:06:48.191 real 0m4.748s 00:06:48.191 user 0m9.209s 00:06:48.191 sys 0m0.406s 00:06:48.191 15:11:51 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.191 15:11:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:48.191 ************************************ 00:06:48.191 END TEST event_scheduler 00:06:48.191 ************************************ 00:06:48.191 15:11:51 event -- common/autotest_common.sh@1142 -- # return 0 00:06:48.191 15:11:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:48.191 15:11:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:48.191 15:11:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.191 15:11:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.191 15:11:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.191 ************************************ 00:06:48.191 START TEST app_repeat 00:06:48.191 ************************************ 00:06:48.191 15:11:51 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2875418 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2875418' 00:06:48.191 Process app_repeat pid: 2875418 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:48.191 spdk_app_start Round 0 00:06:48.191 15:11:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2875418 /var/tmp/spdk-nbd.sock 00:06:48.191 15:11:51 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2875418 ']' 00:06:48.191 15:11:51 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.191 15:11:51 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.191 15:11:51 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:48.191 15:11:51 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.191 15:11:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.191 [2024-07-15 15:11:52.006115] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:48.191 [2024-07-15 15:11:52.006174] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875418 ] 00:06:48.191 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.191 [2024-07-15 15:11:52.075738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.450 [2024-07-15 15:11:52.153283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.450 [2024-07-15 15:11:52.153287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.016 15:11:52 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.016 15:11:52 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:49.016 15:11:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.275 Malloc0 00:06:49.275 15:11:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.533 Malloc1 00:06:49.533 15:11:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.533 /dev/nbd0 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.533 15:11:53 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.533 1+0 records in 00:06:49.533 1+0 records out 00:06:49.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026368 s, 15.5 MB/s 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.533 15:11:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.533 15:11:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.792 /dev/nbd1 00:06:49.792 15:11:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.792 15:11:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.792 1+0 records in 00:06:49.792 1+0 records out 00:06:49.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225693 s, 18.1 MB/s 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.792 15:11:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.792 15:11:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.792 15:11:53 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.792 15:11:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.792 15:11:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.792 15:11:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.051 { 00:06:50.051 "nbd_device": "/dev/nbd0", 00:06:50.051 "bdev_name": "Malloc0" 00:06:50.051 }, 00:06:50.051 { 00:06:50.051 "nbd_device": "/dev/nbd1", 00:06:50.051 "bdev_name": "Malloc1" 00:06:50.051 } 00:06:50.051 ]' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.051 { 00:06:50.051 "nbd_device": "/dev/nbd0", 00:06:50.051 "bdev_name": "Malloc0" 00:06:50.051 }, 00:06:50.051 { 00:06:50.051 "nbd_device": "/dev/nbd1", 00:06:50.051 "bdev_name": "Malloc1" 00:06:50.051 } 00:06:50.051 ]' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.051 /dev/nbd1' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.051 /dev/nbd1' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.051 256+0 records in 00:06:50.051 256+0 records out 00:06:50.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011104 s, 94.4 MB/s 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.051 256+0 records in 00:06:50.051 256+0 records out 00:06:50.051 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198628 s, 52.8 MB/s 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.051 256+0 records in 00:06:50.051 256+0 records out 00:06:50.051 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.015145 s, 69.2 MB/s 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.051 15:11:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.309 15:11:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.310 15:11:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.310 15:11:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.568 15:11:54 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.568 15:11:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.826 15:11:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.826 15:11:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.086 15:11:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.086 [2024-07-15 15:11:54.919613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.086 [2024-07-15 15:11:54.987080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.086 [2024-07-15 15:11:54.987084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.344 [2024-07-15 15:11:55.027226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.344 [2024-07-15 15:11:55.027268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.874 15:11:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.874 15:11:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:53.874 spdk_app_start Round 1 00:06:53.874 15:11:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2875418 /var/tmp/spdk-nbd.sock 00:06:53.874 15:11:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2875418 ']' 00:06:53.874 15:11:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.874 15:11:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.874 15:11:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:53.874 15:11:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.874 15:11:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.132 15:11:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.132 15:11:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:54.132 15:11:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.391 Malloc0 00:06:54.391 15:11:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.391 Malloc1 00:06:54.391 15:11:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.391 15:11:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.650 /dev/nbd0 00:06:54.650 15:11:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.650 15:11:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:54.650 1+0 records in 00:06:54.650 1+0 records out 00:06:54.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210283 s, 19.5 MB/s 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.650 15:11:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.650 15:11:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.650 15:11:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.650 15:11:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.909 /dev/nbd1 00:06:54.909 15:11:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.909 15:11:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.909 1+0 records in 00:06:54.909 1+0 records out 00:06:54.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193057 s, 21.2 MB/s 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.909 15:11:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.909 15:11:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.909 15:11:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.909 15:11:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.909 15:11:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.909 15:11:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:55.168 { 00:06:55.168 "nbd_device": "/dev/nbd0", 00:06:55.168 "bdev_name": "Malloc0" 00:06:55.168 }, 00:06:55.168 { 00:06:55.168 "nbd_device": "/dev/nbd1", 00:06:55.168 "bdev_name": "Malloc1" 00:06:55.168 } 00:06:55.168 ]' 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:55.168 { 00:06:55.168 "nbd_device": "/dev/nbd0", 00:06:55.168 "bdev_name": "Malloc0" 00:06:55.168 }, 00:06:55.168 { 00:06:55.168 "nbd_device": "/dev/nbd1", 00:06:55.168 "bdev_name": "Malloc1" 00:06:55.168 } 00:06:55.168 ]' 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:55.168 /dev/nbd1' 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:55.168 /dev/nbd1' 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.168 15:11:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:55.169 256+0 records in 00:06:55.169 256+0 records out 00:06:55.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114391 s, 91.7 MB/s 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:55.169 256+0 records in 00:06:55.169 256+0 records out 00:06:55.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131451 s, 79.8 MB/s 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:55.169 256+0 records in 00:06:55.169 256+0 records out 00:06:55.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212128 s, 49.4 MB/s 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:55.169 15:11:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.169 15:11:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.427 15:11:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.428 15:11:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.686 15:11:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.686 15:11:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.686 15:11:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.686 15:11:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.686 15:11:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.686 15:11:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.687 15:11:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.945 15:11:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.945 15:11:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.945 15:11:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:56.204 [2024-07-15 15:12:00.003701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.204 [2024-07-15 15:12:00.084408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.204 [2024-07-15 15:12:00.084412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.463 [2024-07-15 15:12:00.128402] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:56.463 [2024-07-15 15:12:00.128444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.033 15:12:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.033 15:12:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:59.033 spdk_app_start Round 2 00:06:59.033 15:12:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2875418 /var/tmp/spdk-nbd.sock 00:06:59.033 15:12:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2875418 ']' 00:06:59.033 15:12:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.033 15:12:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.033 15:12:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:59.033 15:12:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.033 15:12:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.291 15:12:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.291 15:12:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:59.291 15:12:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.291 Malloc0 00:06:59.291 15:12:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.548 Malloc1 00:06:59.548 15:12:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.548 15:12:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:59.807 /dev/nbd0 00:06:59.807 15:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:59.807 15:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:59.807 1+0 records in 00:06:59.807 1+0 records out 00:06:59.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000148765 s, 27.5 MB/s 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:59.807 15:12:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:59.807 15:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.807 15:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.807 15:12:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.807 /dev/nbd1 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.066 1+0 records in 00:07:00.066 1+0 records out 00:07:00.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228416 s, 17.9 MB/s 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:00.066 15:12:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:00.066 { 00:07:00.066 "nbd_device": "/dev/nbd0", 00:07:00.066 "bdev_name": "Malloc0" 00:07:00.066 }, 00:07:00.066 { 00:07:00.066 "nbd_device": "/dev/nbd1", 00:07:00.066 "bdev_name": "Malloc1" 00:07:00.066 } 00:07:00.066 ]' 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.066 { 00:07:00.066 "nbd_device": "/dev/nbd0", 00:07:00.066 "bdev_name": "Malloc0" 00:07:00.066 }, 00:07:00.066 { 00:07:00.066 "nbd_device": "/dev/nbd1", 00:07:00.066 "bdev_name": "Malloc1" 00:07:00.066 } 00:07:00.066 ]' 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.066 /dev/nbd1' 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.066 /dev/nbd1' 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.066 15:12:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.325 256+0 records in 00:07:00.325 256+0 records out 00:07:00.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115438 s, 90.8 MB/s 00:07:00.325 15:12:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.325 15:12:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.325 256+0 records in 00:07:00.325 256+0 records out 00:07:00.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201491 s, 52.0 MB/s 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.325 256+0 records in 00:07:00.325 256+0 records out 00:07:00.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147277 s, 71.2 MB/s 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.325 15:12:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.584 15:12:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.842 15:12:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.842 15:12:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:01.101 15:12:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.360 [2024-07-15 15:12:05.041505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.360 [2024-07-15 15:12:05.106446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.360 [2024-07-15 15:12:05.106450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.360 [2024-07-15 15:12:05.147123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.360 [2024-07-15 15:12:05.147166] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.644 15:12:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2875418 /var/tmp/spdk-nbd.sock 00:07:04.644 15:12:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2875418 ']' 00:07:04.644 15:12:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.644 15:12:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.644 15:12:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:04.644 15:12:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.644 15:12:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:04.644 15:12:08 event.app_repeat -- event/event.sh@39 -- # killprocess 2875418 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2875418 ']' 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2875418 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2875418 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2875418' 00:07:04.644 killing process with pid 2875418 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2875418 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2875418 00:07:04.644 spdk_app_start is called in Round 0. 00:07:04.644 Shutdown signal received, stop current app iteration 00:07:04.644 Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 reinitialization... 00:07:04.644 spdk_app_start is called in Round 1. 00:07:04.644 Shutdown signal received, stop current app iteration 00:07:04.644 Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 reinitialization... 00:07:04.644 spdk_app_start is called in Round 2. 00:07:04.644 Shutdown signal received, stop current app iteration 00:07:04.644 Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 reinitialization... 00:07:04.644 spdk_app_start is called in Round 3. 
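The killprocess sequence traced above (kill -0, uname, ps -o comm=, the sudo guard, then kill and wait) reconstructs to roughly this helper; the behavior of the sudo branch is an assumption here:

```bash
# Sketch of the killprocess pattern (approximating autotest_common.sh).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0              # liveness probe; sends no signal
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        # assumed guard: never signal a bare sudo wrapper directly
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                     # reap; nonzero exit after SIGTERM is expected
}
```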
00:07:04.644 Shutdown signal received, stop current app iteration 00:07:04.644 15:12:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:04.644 15:12:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:04.644 00:07:04.644 real 0m16.278s 00:07:04.644 user 0m34.585s 00:07:04.644 sys 0m2.992s 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.644 15:12:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.644 ************************************ 00:07:04.645 END TEST app_repeat 00:07:04.645 ************************************ 00:07:04.645 15:12:08 event -- common/autotest_common.sh@1142 -- # return 0 00:07:04.645 15:12:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:04.645 15:12:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:04.645 15:12:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.645 15:12:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.645 15:12:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.645 ************************************ 00:07:04.645 START TEST cpu_locks 00:07:04.645 ************************************ 00:07:04.645 15:12:08 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:04.645 * Looking for test storage... 00:07:04.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:04.645 15:12:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:04.645 15:12:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:04.645 15:12:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:04.645 15:12:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:04.645 15:12:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.645 15:12:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.645 15:12:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.645 ************************************ 00:07:04.645 START TEST default_locks 00:07:04.645 ************************************ 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2878884 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2878884 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2878884 ']' 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
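The default_locks run below probes the core lock with locks_exist, which pipes lslocks into grep. A minimal reconstruction (helper internals are assumptions):

```bash
# Sketch: does $pid hold any spdk_cpu_lock file?
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

The stray "lslocks: write error" lines that follow these probes are most likely harmless EPIPE noise: grep -q exits on its first match and closes the pipe while lslocks is still writing, so the check itself still succeeds.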
00:07:04.645 15:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.645 15:12:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.645 [2024-07-15 15:12:08.518557] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:04.645 [2024-07-15 15:12:08.518606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878884 ] 00:07:04.645 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.903 [2024-07-15 15:12:08.588199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.903 [2024-07-15 15:12:08.661950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.470 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.470 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:05.470 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2878884 00:07:05.470 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2878884 00:07:05.470 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.728 lslocks: write error 00:07:05.728 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2878884 00:07:05.728 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2878884 ']' 00:07:05.728 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2878884 00:07:05.728 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:05.728 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.728 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2878884 00:07:05.987 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.987 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.987 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2878884' 00:07:05.987 killing process with pid 2878884 00:07:05.987 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2878884 00:07:05.987 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2878884 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2878884 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2878884 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:06.246 15:12:09 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 2878884 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2878884 ']' 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2878884) - No such process 00:07:06.246 ERROR: process (pid: 2878884) is no longer running 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:06.246 00:07:06.246 real 0m1.488s 00:07:06.246 user 0m1.513s 00:07:06.246 sys 0m0.534s 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.246 15:12:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.246 ************************************ 00:07:06.246 END TEST default_locks 00:07:06.246 ************************************ 00:07:06.246 15:12:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:06.246 15:12:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:06.246 15:12:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.246 15:12:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.247 15:12:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.247 ************************************ 00:07:06.247 START TEST default_locks_via_rpc 00:07:06.247 ************************************ 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2879189 00:07:06.247 15:12:10 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2879189 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2879189 ']' 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.247 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.247 [2024-07-15 15:12:10.088664] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:06.247 [2024-07-15 15:12:10.088717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879189 ] 00:07:06.247 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.505 [2024-07-15 15:12:10.158946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.505 [2024-07-15 15:12:10.233907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2879189 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 2879189 00:07:07.073 15:12:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.331 15:12:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2879189 00:07:07.331 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2879189 ']' 00:07:07.331 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2879189 00:07:07.331 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:07.331 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.331 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2879189 00:07:07.590 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.590 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.590 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2879189' 00:07:07.590 killing process with pid 2879189 00:07:07.590 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2879189 00:07:07.590 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2879189 00:07:07.849 00:07:07.849 real 0m1.533s 00:07:07.849 user 0m1.587s 00:07:07.849 sys 0m0.554s 00:07:07.849 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.849 15:12:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.849 ************************************ 00:07:07.849 END TEST default_locks_via_rpc 00:07:07.849 ************************************ 00:07:07.849 15:12:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:07.849 15:12:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:07.849 15:12:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.849 15:12:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.849 15:12:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.849 ************************************ 00:07:07.849 START TEST non_locking_app_on_locked_coremask 00:07:07.849 ************************************ 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2879507 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2879507 /var/tmp/spdk.sock 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2879507 ']' 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.849 15:12:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.849 15:12:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.849 [2024-07-15 15:12:11.703301] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:07.849 [2024-07-15 15:12:11.703348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879507 ] 00:07:07.849 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.108 [2024-07-15 15:12:11.771109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.108 [2024-07-15 15:12:11.844608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2879731 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2879731 /var/tmp/spdk2.sock 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2879731 ']' 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.676 15:12:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.676 [2024-07-15 15:12:12.544976] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:08.676 [2024-07-15 15:12:12.545027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879731 ] 00:07:08.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.935 [2024-07-15 15:12:12.644916] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.935 [2024-07-15 15:12:12.644947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.935 [2024-07-15 15:12:12.789085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.501 15:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.501 15:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.501 15:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2879507 00:07:09.501 15:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2879507 00:07:09.501 15:12:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.875 lslocks: write error 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2879507 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2879507 ']' 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2879507 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2879507 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2879507' 00:07:10.875 killing process with pid 2879507 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2879507 00:07:10.875 15:12:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2879507 00:07:11.441 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2879731 00:07:11.441 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2879731 ']' 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2879731 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- 
# ps --no-headers -o comm= 2879731 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2879731' 00:07:11.442 killing process with pid 2879731 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2879731 00:07:11.442 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2879731 00:07:11.700 00:07:11.700 real 0m3.835s 00:07:11.700 user 0m4.080s 00:07:11.700 sys 0m1.287s 00:07:11.700 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.700 15:12:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.700 ************************************ 00:07:11.700 END TEST non_locking_app_on_locked_coremask 00:07:11.700 ************************************ 00:07:11.700 15:12:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.700 15:12:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.700 15:12:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.700 15:12:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.700 15:12:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.700 ************************************ 00:07:11.700 START TEST locking_app_on_unlocked_coremask 00:07:11.700 ************************************ 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2880297 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2880297 /var/tmp/spdk.sock 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2880297 ']' 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
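Every START TEST / END TEST banner and real/user/sys triple in this log comes from the run_test wrapper; its rough shape (an approximation, not the verbatim autotest_common.sh code) is:

```bash
# Sketch: banner + timing wrapper around a single test function.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"      # bash's time keyword prints the real/user/sys lines
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
```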
00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.700 15:12:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.958 [2024-07-15 15:12:15.617775] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:11.958 [2024-07-15 15:12:15.617822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880297 ] 00:07:11.958 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.958 [2024-07-15 15:12:15.686275] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:11.958 [2024-07-15 15:12:15.686299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.958 [2024-07-15 15:12:15.760885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2880413 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2880413 /var/tmp/spdk2.sock 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2880413 ']' 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.525 15:12:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.783 [2024-07-15 15:12:16.457400] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
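locking_app_on_unlocked_coremask inverts the previous test: here the first target declines the lock, so the second, plain target can claim it on the same mask. The skeleton being traced, condensed (spdk_tgt stands for the build/bin/spdk_tgt binary seen in the trace; pids and bookkeeping are illustrative):

```bash
# Instance 1 opts out of core locking; instance 2 then takes the lock.
spdk_tgt -m 0x1 --disable-cpumask-locks &
tgt1=$!
waitforlisten "$tgt1" /var/tmp/spdk.sock        # no spdk_cpu_lock taken

spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
tgt2=$!
waitforlisten "$tgt2" /var/tmp/spdk2.sock       # claims spdk_cpu_lock_000
locks_exist "$tgt2"                             # the lock belongs to instance 2
```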
00:07:12.783 [2024-07-15 15:12:16.457467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880413 ] 00:07:12.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.783 [2024-07-15 15:12:16.551027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.042 [2024-07-15 15:12:16.702218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.608 15:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.608 15:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:13.608 15:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2880413 00:07:13.608 15:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2880413 00:07:13.608 15:12:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.173 lslocks: write error 00:07:14.173 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2880297 00:07:14.173 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2880297 ']' 00:07:14.173 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2880297 00:07:14.173 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:14.173 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.173 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2880297 00:07:14.439 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.439 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.439 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2880297' 00:07:14.439 killing process with pid 2880297 00:07:14.439 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2880297 00:07:14.439 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2880297 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2880413 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2880413 ']' 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2880413 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2880413 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2880413' 00:07:15.013 killing process with pid 2880413 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2880413 00:07:15.013 15:12:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2880413 00:07:15.271 00:07:15.271 real 0m3.519s 00:07:15.271 user 0m3.765s 00:07:15.271 sys 0m1.196s 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.271 ************************************ 00:07:15.271 END TEST locking_app_on_unlocked_coremask 00:07:15.271 ************************************ 00:07:15.271 15:12:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:15.271 15:12:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:15.271 15:12:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.271 15:12:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.271 15:12:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.271 ************************************ 00:07:15.271 START TEST locking_app_on_locked_coremask 00:07:15.271 ************************************ 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2880873 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2880873 /var/tmp/spdk.sock 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2880873 ']' 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.271 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.529 [2024-07-15 15:12:19.206886] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
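waitforlisten, traced at every target start, blocks for up to max_retries=100 attempts until the given pid answers on its RPC socket. One plausible shape follows; the readiness probe is an assumption (rpc_get_methods is just a cheap RPC any live target answers), and the real helper in autotest_common.sh differs in detail:

```bash
# Sketch: wait until $pid is serving RPCs on $rpc_addr, or give up.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [ -n "$pid" ] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" || return 1      # target died while starting
        "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}
```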
00:07:15.529 [2024-07-15 15:12:19.206932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880873 ] 00:07:15.529 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.529 [2024-07-15 15:12:19.274967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.529 [2024-07-15 15:12:19.347220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.095 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2881125 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2881125 /var/tmp/spdk2.sock 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2881125 /var/tmp/spdk2.sock 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2881125 /var/tmp/spdk2.sock 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2881125 ']' 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.096 15:12:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.354 [2024-07-15 15:12:20.038056] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:16.354 [2024-07-15 15:12:20.038112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881125 ] 00:07:16.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.354 [2024-07-15 15:12:20.141611] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2880873 has claimed it. 00:07:16.354 [2024-07-15 15:12:20.141651] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2881125) - No such process 00:07:16.919 ERROR: process (pid: 2881125) is no longer running 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2880873 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2880873 00:07:16.919 15:12:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.228 lslocks: write error 00:07:17.228 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2880873 00:07:17.228 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2880873 ']' 00:07:17.228 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2880873 00:07:17.228 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:17.228 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.228 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2880873 00:07:17.500 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.500 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.500 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2880873' 00:07:17.500 killing process with pid 2880873 00:07:17.500 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2880873 00:07:17.500 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2880873 00:07:17.758 00:07:17.758 real 0m2.298s 00:07:17.758 user 0m2.517s 00:07:17.758 sys 0m0.661s 00:07:17.758 15:12:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.758 15:12:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.758 ************************************ 00:07:17.758 END TEST locking_app_on_locked_coremask 00:07:17.758 ************************************ 00:07:17.758 15:12:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.758 15:12:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.758 15:12:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.758 15:12:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.758 15:12:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.758 ************************************ 00:07:17.758 START TEST locking_overlapped_coremask 00:07:17.758 ************************************ 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2881423 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2881423 /var/tmp/spdk.sock 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2881423 ']' 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.758 15:12:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.758 [2024-07-15 15:12:21.584708] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
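Both the refused-waitforlisten check above and the lock-contention checks that follow rely on NOT, an expected-failure wrapper whose semantics are pinned down by the trace (es=1 after the inner command fails, then return 0). Simplified, with $stale_pid a placeholder:

```bash
# Sketch: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # (the real helper also inspects es > 128 for death-by-signal; elided)
    (( es == 0 )) && return 1   # inner command unexpectedly succeeded
    return 0
}

NOT waitforlisten "$stale_pid" /var/tmp/spdk2.sock   # passes only if waitforlisten fails
```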
00:07:17.758 [2024-07-15 15:12:21.584757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881423 ] 00:07:17.758 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.758 [2024-07-15 15:12:21.654964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.016 [2024-07-15 15:12:21.731323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.016 [2024-07-15 15:12:21.731415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.016 [2024-07-15 15:12:21.731418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2881459 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2881459 /var/tmp/spdk2.sock 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2881459 /var/tmp/spdk2.sock 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2881459 /var/tmp/spdk2.sock 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2881459 ']' 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.582 15:12:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.582 [2024-07-15 15:12:22.440173] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:18.582 [2024-07-15 15:12:22.440225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881459 ] 00:07:18.582 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.839 [2024-07-15 15:12:22.542405] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2881423 has claimed it. 00:07:18.839 [2024-07-15 15:12:22.542446] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:19.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2881459) - No such process 00:07:19.405 ERROR: process (pid: 2881459) is no longer running 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2881423 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2881423 ']' 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2881423 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2881423 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2881423' 00:07:19.405 killing process with pid 2881423 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2881423 00:07:19.405 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2881423 00:07:19.664 00:07:19.664 real 0m1.897s 00:07:19.664 user 0m5.306s 00:07:19.664 sys 0m0.461s 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.664 ************************************ 00:07:19.664 END TEST locking_overlapped_coremask 00:07:19.664 ************************************ 00:07:19.664 15:12:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:19.664 15:12:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:19.664 15:12:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.664 15:12:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.664 15:12:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.664 ************************************ 00:07:19.664 START TEST locking_overlapped_coremask_via_rpc 00:07:19.664 ************************************ 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2881734 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2881734 /var/tmp/spdk.sock 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2881734 ']' 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.664 15:12:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.664 [2024-07-15 15:12:23.568546] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:19.664 [2024-07-15 15:12:23.568593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881734 ] 00:07:19.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.921 [2024-07-15 15:12:23.637960] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
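The check_remaining_locks step traced just above compares the lock files actually present against the exact set a -m 0x7 target must own (cores 0 through 2). It reconstructs almost verbatim to:

```bash
# Lock files present must be exactly spdk_cpu_lock_000..002.
check_remaining_locks() {
    local locks locks_expected
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}
```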
00:07:19.921 [2024-07-15 15:12:23.637985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.921 [2024-07-15 15:12:23.705010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.921 [2024-07-15 15:12:23.705105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.921 [2024-07-15 15:12:23.705108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2881980 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2881980 /var/tmp/spdk2.sock 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2881980 ']' 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.485 15:12:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.743 [2024-07-15 15:12:24.416750] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:20.743 [2024-07-15 15:12:24.416805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881980 ] 00:07:20.743 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.743 [2024-07-15 15:12:24.516434] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
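With the primary target up on the default /var/tmp/spdk.sock, the test brings up a second target on its own RPC socket; a sketch of the topology (binary path shortened to spdk_tgt, flags as traced by cpu_locks.sh@147 and @151 — the secondary's launch is traced next):

    spdk_tgt -m 0x7 --disable-cpumask-locks &                         # primary, /var/tmp/spdk.sock
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks & # secondary
    # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4: core 2 is deliberately shared,
    # and --disable-cpumask-locks lets both processes start despite the overlap.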
00:07:20.743 [2024-07-15 15:12:24.516467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.000 [2024-07-15 15:12:24.660264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.000 [2024-07-15 15:12:24.660383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.000 [2024-07-15 15:12:24.660383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.564 [2024-07-15 15:12:25.231909] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2881734 has claimed it. 
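The NOT wrapper visible in the xtrace above succeeds only when the wrapped command fails; a simplified sketch (the real helper in autotest_common.sh also classifies exit codes, which is what the es bookkeeping is for). The JSON-RPC record of the failed exchange follows below.

    NOT() {
        "$@"
        local es=$?
        (( es != 0 ))   # invert: the test passes because the RPC failed
    }
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks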
00:07:21.564 request: 00:07:21.564 { 00:07:21.564 "method": "framework_enable_cpumask_locks", 00:07:21.564 "req_id": 1 00:07:21.564 } 00:07:21.564 Got JSON-RPC error response 00:07:21.564 response: 00:07:21.564 { 00:07:21.564 "code": -32603, 00:07:21.564 "message": "Failed to claim CPU core: 2" 00:07:21.564 } 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2881734 /var/tmp/spdk.sock 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2881734 ']' 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2881980 /var/tmp/spdk2.sock 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2881980 ']' 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
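The -32603 response above can be reproduced by hand while both targets are listening; scripts/rpc.py is the JSON-RPC client that rpc_cmd wraps (paths relative to the spdk checkout; a hedged sketch, not part of the test):

    ./scripts/rpc.py framework_enable_cpumask_locks                        # primary: cores 0-2 locked
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> "Failed to claim CPU core: 2" (code -32603): the primary, pid 2881734,
    #    already holds the lock file for the shared core.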
00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.564 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.821 00:07:21.821 real 0m2.082s 00:07:21.821 user 0m0.817s 00:07:21.821 sys 0m0.204s 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.821 15:12:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.821 ************************************ 00:07:21.821 END TEST locking_overlapped_coremask_via_rpc 00:07:21.821 ************************************ 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:21.821 15:12:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.821 15:12:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2881734 ]] 00:07:21.821 15:12:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2881734 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2881734 ']' 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2881734 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2881734 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2881734' 00:07:21.821 killing process with pid 2881734 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2881734 00:07:21.821 15:12:25 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2881734 00:07:22.385 15:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2881980 ]] 00:07:22.385 15:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2881980 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2881980 ']' 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2881980 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2881980 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2881980' 00:07:22.385 killing process with pid 2881980 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2881980 00:07:22.385 15:12:26 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2881980 00:07:22.644 15:12:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.644 15:12:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:22.644 15:12:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2881734 ]] 00:07:22.644 15:12:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2881734 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2881734 ']' 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2881734 00:07:22.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2881734) - No such process 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2881734 is not found' 00:07:22.644 Process with pid 2881734 is not found 00:07:22.644 15:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2881980 ]] 00:07:22.644 15:12:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2881980 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2881980 ']' 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2881980 00:07:22.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2881980) - No such process 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2881980 is not found' 00:07:22.644 Process with pid 2881980 is not found 00:07:22.644 15:12:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:22.644 00:07:22.644 real 0m18.058s 00:07:22.644 user 0m30.045s 00:07:22.644 sys 0m5.963s 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.644 15:12:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.644 ************************************ 00:07:22.644 END TEST cpu_locks 00:07:22.644 ************************************ 00:07:22.644 15:12:26 event -- common/autotest_common.sh@1142 -- # return 0 00:07:22.644 00:07:22.644 real 0m43.382s 00:07:22.644 user 1m20.470s 00:07:22.644 sys 0m10.057s 00:07:22.644 15:12:26 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.644 15:12:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.644 ************************************ 00:07:22.644 END TEST event 00:07:22.644 ************************************ 00:07:22.644 15:12:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.644 15:12:26 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.644 15:12:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.644 15:12:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.644 
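The thread suite launched here exercises the poller bookkeeping twice via test/thread/poller_perf; the two invocations about to run differ only in the poller period:

    poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s runtime
    poller_perf -b 1000 -l 0 -t 1   # same load, 0 us period (busy-spin pollers)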
15:12:26 -- common/autotest_common.sh@10 -- # set +x 00:07:22.644 ************************************ 00:07:22.644 START TEST thread 00:07:22.644 ************************************ 00:07:22.644 15:12:26 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:22.903 * Looking for test storage... 00:07:22.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:22.903 15:12:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.903 15:12:26 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.903 15:12:26 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.903 15:12:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.903 ************************************ 00:07:22.903 START TEST thread_poller_perf 00:07:22.903 ************************************ 00:07:22.903 15:12:26 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:22.903 [2024-07-15 15:12:26.685639] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:22.903 [2024-07-15 15:12:26.685716] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882365 ] 00:07:22.903 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.903 [2024-07-15 15:12:26.757679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.163 [2024-07-15 15:12:26.828474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.163 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:24.100 ====================================== 00:07:24.100 busy:2505896604 (cyc) 00:07:24.100 total_run_count: 428000 00:07:24.100 tsc_hz: 2500000000 (cyc) 00:07:24.100 ====================================== 00:07:24.100 poller_cost: 5854 (cyc), 2341 (nsec) 00:07:24.100 00:07:24.100 real 0m1.236s 00:07:24.100 user 0m1.146s 00:07:24.100 sys 0m0.086s 00:07:24.100 15:12:27 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.100 15:12:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.100 ************************************ 00:07:24.100 END TEST thread_poller_perf 00:07:24.100 ************************************ 00:07:24.100 15:12:27 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:24.100 15:12:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.100 15:12:27 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:24.100 15:12:27 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.100 15:12:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.100 ************************************ 00:07:24.100 START TEST thread_poller_perf 00:07:24.100 ************************************ 00:07:24.100 15:12:27 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.359 [2024-07-15 15:12:28.009106] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:24.359 [2024-07-15 15:12:28.009208] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882643 ] 00:07:24.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.359 [2024-07-15 15:12:28.080431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.359 [2024-07-15 15:12:28.148820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.359 Running 1000 pollers for 1 seconds with 0 microseconds period. 
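The poller_cost line in the first table above is derived from the reported counters; in bash integer math (tsc_hz is the 2.5 GHz timestamp counter):

    busy=2505896604; runs=428000; tsc_hz=2500000000
    echo $(( busy / runs ))                        # 5854 cycles per poller call
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2341 ns
    # The 0 us run below lands at 437 cycles (174 ns): with no sleep period the
    # pollers stay hot and per-call overhead drops by roughly 13x.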
00:07:25.736 ====================================== 00:07:25.736 busy:2501867174 (cyc) 00:07:25.736 total_run_count: 5724000 00:07:25.736 tsc_hz: 2500000000 (cyc) 00:07:25.736 ====================================== 00:07:25.736 poller_cost: 437 (cyc), 174 (nsec) 00:07:25.736 00:07:25.736 real 0m1.227s 00:07:25.736 user 0m1.134s 00:07:25.736 sys 0m0.089s 00:07:25.736 15:12:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.736 15:12:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:25.736 ************************************ 00:07:25.736 END TEST thread_poller_perf 00:07:25.736 ************************************ 00:07:25.736 15:12:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:25.736 15:12:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:25.736 00:07:25.736 real 0m2.742s 00:07:25.737 user 0m2.386s 00:07:25.737 sys 0m0.369s 00:07:25.737 15:12:29 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.737 15:12:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.737 ************************************ 00:07:25.737 END TEST thread 00:07:25.737 ************************************ 00:07:25.737 15:12:29 -- common/autotest_common.sh@1142 -- # return 0 00:07:25.737 15:12:29 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:25.737 15:12:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.737 15:12:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.737 15:12:29 -- common/autotest_common.sh@10 -- # set +x 00:07:25.737 ************************************ 00:07:25.737 START TEST accel 00:07:25.737 ************************************ 00:07:25.737 15:12:29 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:25.737 * Looking for test storage... 00:07:25.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:25.737 15:12:29 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:25.737 15:12:29 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:25.737 15:12:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.737 15:12:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2882975 00:07:25.737 15:12:29 accel -- accel/accel.sh@63 -- # waitforlisten 2882975 00:07:25.737 15:12:29 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:25.737 15:12:29 accel -- common/autotest_common.sh@829 -- # '[' -z 2882975 ']' 00:07:25.737 15:12:29 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.737 15:12:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:25.737 15:12:29 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.737 15:12:29 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:25.737 15:12:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.737 15:12:29 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.737 15:12:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.737 15:12:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.737 15:12:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.737 15:12:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.737 15:12:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.737 15:12:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:25.737 15:12:29 accel -- accel/accel.sh@41 -- # jq -r . 00:07:25.737 [2024-07-15 15:12:29.503549] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:25.737 [2024-07-15 15:12:29.503608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882975 ] 00:07:25.737 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.737 [2024-07-15 15:12:29.573323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.995 [2024-07-15 15:12:29.650560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@862 -- # return 0 00:07:26.563 15:12:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:26.563 15:12:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:26.563 15:12:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:26.563 15:12:30 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:26.563 15:12:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.563 15:12:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.563 15:12:30 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 
15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # IFS== 00:07:26.563 15:12:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:26.563 15:12:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.563 15:12:30 accel -- accel/accel.sh@75 -- # killprocess 2882975 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@948 -- # '[' -z 2882975 ']' 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@952 -- # kill -0 2882975 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@953 -- # uname 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2882975 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2882975' 00:07:26.563 killing process with pid 2882975 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@967 -- # kill 2882975 00:07:26.563 15:12:30 accel -- common/autotest_common.sh@972 -- # wait 2882975 00:07:26.822 15:12:30 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:26.822 15:12:30 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:26.822 15:12:30 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.822 15:12:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.822 15:12:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.081 15:12:30 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:27.081 15:12:30 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
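The for/IFS loop above is accel.sh walking the opcode-to-module table; the same view can be obtained directly (jq filter copied from the script trace):

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # Every opcode on this runner resolves to "=software": evidently no
    # hardware accel module was configured into this build.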
00:07:27.081 15:12:30 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.081 15:12:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:27.081 15:12:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.081 15:12:30 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:27.081 15:12:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:27.081 15:12:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.081 15:12:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.081 ************************************ 00:07:27.081 START TEST accel_missing_filename 00:07:27.081 ************************************ 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.081 15:12:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:27.081 15:12:30 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:27.081 [2024-07-15 15:12:30.902180] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:27.081 [2024-07-15 15:12:30.902249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883271 ] 00:07:27.081 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.081 [2024-07-15 15:12:30.974815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.341 [2024-07-15 15:12:31.048657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.341 [2024-07-15 15:12:31.089454] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.341 [2024-07-15 15:12:31.149133] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:27.341 A filename is required. 
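accel_missing_filename asserts that a compress workload with no input file is refused at startup; in isolation (accel_perf from build/examples, bib being the sample input the next test uses — note the harness folds the raw nonzero status, 234 here, down to es=1 before asserting failure):

    accel_perf -t 1 -w compress                    # -> "A filename is required.", nonzero exit
    accel_perf -t 1 -w compress -l test/accel/bib  # with -l it has an input file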
00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:27.341 00:07:27.341 real 0m0.347s 00:07:27.341 user 0m0.248s 00:07:27.341 sys 0m0.134s 00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.341 15:12:31 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:27.341 ************************************ 00:07:27.341 END TEST accel_missing_filename 00:07:27.341 ************************************ 00:07:27.600 15:12:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.600 15:12:31 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.600 15:12:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:27.600 15:12:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.600 15:12:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.600 ************************************ 00:07:27.600 START TEST accel_compress_verify 00:07:27.600 ************************************ 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.600 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.600 15:12:31 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:27.600 15:12:31 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:27.600 [2024-07-15 15:12:31.332802] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:27.600 [2024-07-15 15:12:31.332878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883293 ] 00:07:27.600 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.600 [2024-07-15 15:12:31.404978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.600 [2024-07-15 15:12:31.479305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.858 [2024-07-15 15:12:31.520678] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.859 [2024-07-15 15:12:31.580825] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:27.859 00:07:27.859 Compression does not support the verify option, aborting. 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:27.859 00:07:27.859 real 0m0.349s 00:07:27.859 user 0m0.254s 00:07:27.859 sys 0m0.131s 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.859 15:12:31 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 ************************************ 00:07:27.859 END TEST accel_compress_verify 00:07:27.859 ************************************ 00:07:27.859 15:12:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.859 15:12:31 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:27.859 15:12:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:27.859 15:12:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.859 15:12:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.859 ************************************ 00:07:27.859 START TEST accel_wrong_workload 00:07:27.859 ************************************ 00:07:27.859 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:27.859 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:27.859 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:27.859 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:27.859 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.859 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:27.859 15:12:31 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:27.859 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:27.859 15:12:31 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:27.859 Unsupported workload type: foobar 00:07:27.859 [2024-07-15 15:12:31.763574] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:28.118 accel_perf options: 00:07:28.118 [-h help message] 00:07:28.118 [-q queue depth per core] 00:07:28.118 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.118 [-T number of threads per core 00:07:28.118 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:28.118 [-t time in seconds] 00:07:28.118 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.118 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:28.118 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.118 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.118 [-S for crc32c workload, use this seed value (default 0) 00:07:28.118 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.118 [-f for fill workload, use this BYTE value (default 255) 00:07:28.118 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.118 [-y verify result if this switch is on] 00:07:28.118 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.118 Can be used to spread operations across a wider range of memory. 
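The es handling above distinguishes a clean parser rejection from a crash: spdk_app_parse_args makes accel_perf exit with status 1, while anything above 128 would mean death by signal. Sketched directly:

    accel_perf -t 1 -w foobar
    es=$?
    (( es == 1 ))                                  # expected: usage error, no crash
    (( es > 128 )) && echo "died on signal $(( es - 128 ))"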
00:07:28.118 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:28.118 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.118 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.118 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.118 00:07:28.118 real 0m0.036s 00:07:28.118 user 0m0.045s 00:07:28.118 sys 0m0.017s 00:07:28.118 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.118 15:12:31 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:28.118 ************************************ 00:07:28.118 END TEST accel_wrong_workload 00:07:28.118 ************************************ 00:07:28.118 15:12:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.118 15:12:31 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.118 15:12:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:28.118 15:12:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.118 15:12:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.118 ************************************ 00:07:28.118 START TEST accel_negative_buffers 00:07:28.118 ************************************ 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.118 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:28.118 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:28.118 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:28.118 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.119 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.119 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.119 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.119 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.119 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:28.119 15:12:31 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:28.119 -x option must be non-negative. 
00:07:28.119 [2024-07-15 15:12:31.857892] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:28.119 accel_perf options: 00:07:28.119 [-h help message] 00:07:28.119 [-q queue depth per core] 00:07:28.119 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:28.119 [-T number of threads per core 00:07:28.119 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:28.119 [-t time in seconds] 00:07:28.119 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:28.119 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:28.119 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:28.119 [-l for compress/decompress workloads, name of uncompressed input file 00:07:28.119 [-S for crc32c workload, use this seed value (default 0) 00:07:28.119 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:28.119 [-f for fill workload, use this BYTE value (default 255) 00:07:28.119 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:28.119 [-y verify result if this switch is on] 00:07:28.119 [-a tasks to allocate per core (default: same value as -q)] 00:07:28.119 Can be used to spread operations across a wider range of memory. 00:07:28.119 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:28.119 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.119 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.119 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.119 00:07:28.119 real 0m0.020s 00:07:28.119 user 0m0.007s 00:07:28.119 sys 0m0.013s 00:07:28.119 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.119 15:12:31 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:28.119 ************************************ 00:07:28.119 END TEST accel_negative_buffers 00:07:28.119 ************************************ 00:07:28.119 Error: writing output failed: Broken pipe 00:07:28.119 15:12:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.119 15:12:31 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:28.119 15:12:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:28.119 15:12:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.119 15:12:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.119 ************************************ 00:07:28.119 START TEST accel_crc32c 00:07:28.119 ************************************ 00:07:28.119 15:12:31 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:28.119 15:12:31 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:28.119 [2024-07-15 15:12:31.961169] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:28.119 [2024-07-15 15:12:31.961229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883547 ] 00:07:28.119 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.378 [2024-07-15 15:12:32.032057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.378 [2024-07-15 15:12:32.103780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.378 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.379 15:12:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:29.755 15:12:33 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.755 00:07:29.755 real 0m1.342s 00:07:29.755 user 0m1.221s 00:07:29.755 sys 0m0.126s 00:07:29.755 15:12:33 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.755 15:12:33 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:29.755 ************************************ 00:07:29.755 END TEST accel_crc32c 00:07:29.755 ************************************ 00:07:29.755 15:12:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.755 15:12:33 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:29.755 15:12:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:29.755 15:12:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.755 15:12:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.755 ************************************ 00:07:29.755 START TEST accel_crc32c_C2 00:07:29.755 ************************************ 00:07:29.755 15:12:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:29.755 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:29.756 15:12:33 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:29.756 [2024-07-15 15:12:33.363917] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:29.756 [2024-07-15 15:12:33.363961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883772 ] 00:07:29.756 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.756 [2024-07-15 15:12:33.435304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.756 [2024-07-15 15:12:33.506248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:29.756 15:12:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.131 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.132 00:07:31.132 real 0m1.333s 00:07:31.132 user 0m1.211s 00:07:31.132 sys 0m0.124s 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.132 15:12:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:31.132 ************************************ 00:07:31.132 END TEST accel_crc32c_C2 00:07:31.132 ************************************ 00:07:31.132 15:12:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.132 15:12:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:31.132 15:12:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:31.132 15:12:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.132 15:12:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.132 ************************************ 00:07:31.132 START TEST accel_copy 00:07:31.132 ************************************ 00:07:31.132 15:12:34 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
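The _C2 variant that just finished differs from the base case only in its trailing arguments: accel.sh's accel_test wrapper turns `accel_test -t 1 -w crc32c -y -C 2` into the same accel_perf binary with those flags appended. A sketch of the two invocations side by side (same assumptions as the snippet above; the reading that -C 2 chains the CRC across multiple source buffers is an inference from the flag name and trace, not something the log states):

# Base crc32c case vs. the -C 2 variant; both verify (-y) for one second (-t 1).
./spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
./spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2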
00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:31.132 [2024-07-15 15:12:34.781923] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:31.132 [2024-07-15 15:12:34.781977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883990 ] 00:07:31.132 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.132 [2024-07-15 15:12:34.850143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.132 [2024-07-15 15:12:34.918623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:31.132 15:12:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 
15:12:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:32.508 15:12:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.508 00:07:32.508 real 0m1.335s 00:07:32.508 user 0m1.204s 00:07:32.508 sys 0m0.135s 00:07:32.508 15:12:36 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.508 15:12:36 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.508 ************************************ 00:07:32.508 END TEST accel_copy 00:07:32.508 ************************************ 00:07:32.508 15:12:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.508 15:12:36 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.508 15:12:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:32.508 15:12:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.508 15:12:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.508 ************************************ 00:07:32.508 START TEST accel_fill 00:07:32.508 ************************************ 00:07:32.508 15:12:36 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:32.508 [2024-07-15 15:12:36.187368] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:32.508 [2024-07-15 15:12:36.187425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884214 ] 00:07:32.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.508 [2024-07-15 15:12:36.256760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.508 [2024-07-15 15:12:36.325155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
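The fill case adds data-pattern and queue arguments on top of the common set. A standalone sketch (same assumptions as above; note that -f 128 shows up in the trace as val=0x80, apparently the decimal fill byte echoed back in hex, while -q 64 and -a 64 are copied as-is from the harness command line without further interpretation):

# Fill 4096-byte buffers with byte 0x80 (-f 128) for one second and verify (-y);
# -q 64 and -a 64 are passed through exactly as the harness does.
./spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y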
00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.508 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.509 15:12:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.509 15:12:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.509 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.509 15:12:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.882 15:12:37 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:33.882 15:12:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.882 00:07:33.882 real 0m1.336s 00:07:33.882 user 0m1.216s 00:07:33.882 sys 0m0.125s 00:07:33.883 15:12:37 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.883 15:12:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:33.883 ************************************ 00:07:33.883 END TEST accel_fill 00:07:33.883 ************************************ 00:07:33.883 15:12:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.883 15:12:37 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:33.883 15:12:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:33.883 15:12:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.883 15:12:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.883 ************************************ 00:07:33.883 START TEST accel_copy_crc32c 00:07:33.883 ************************************ 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:33.883 [2024-07-15 15:12:37.594959] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:33.883 [2024-07-15 15:12:37.595018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884493 ] 00:07:33.883 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.883 [2024-07-15 15:12:37.663246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.883 [2024-07-15 15:12:37.731412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.883 
15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.883 15:12:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.260 00:07:35.260 real 0m1.336s 00:07:35.260 user 0m1.209s 00:07:35.260 sys 0m0.132s 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.260 15:12:38 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:35.260 ************************************ 00:07:35.260 END TEST accel_copy_crc32c 00:07:35.260 ************************************ 00:07:35.260 15:12:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.260 15:12:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:35.260 15:12:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:35.260 15:12:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.260 15:12:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.260 ************************************ 00:07:35.260 START TEST accel_copy_crc32c_C2 00:07:35.260 ************************************ 00:07:35.260 15:12:38 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:35.260 15:12:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:35.260 [2024-07-15 15:12:38.998925] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:35.260 [2024-07-15 15:12:38.998978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884775 ] 00:07:35.260 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.260 [2024-07-15 15:12:39.066367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.260 [2024-07-15 15:12:39.134779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
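As with plain crc32c, the chained copy_crc32c run is driven by one extra flag. A sketch (same assumptions as the earlier snippets; the trace that follows echoes both a 4096-byte and an 8192-byte value, which is consistent with -C 2 splitting the operation across two chained buffers, though that is an inference rather than something the log states):

# copy+crc32c with a two-element chain (-C 2), verifying (-y) for one second.
./spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2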
00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.519 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.520 15:12:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.453 00:07:36.453 real 0m1.335s 00:07:36.453 user 0m1.222s 00:07:36.453 sys 0m0.126s 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.453 15:12:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:36.453 ************************************ 00:07:36.453 END TEST accel_copy_crc32c_C2 00:07:36.453 ************************************ 00:07:36.453 15:12:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.453 15:12:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:36.453 15:12:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:36.453 15:12:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.453 15:12:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.711 ************************************ 00:07:36.711 START TEST accel_dualcast 00:07:36.711 ************************************ 00:07:36.711 15:12:40 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:36.711 [2024-07-15 15:12:40.415886] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:36.711 [2024-07-15 15:12:40.415886] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:07:36.711 [2024-07-15 15:12:40.415950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885052 ]
00:07:36.711 EAL: No free 2048 kB hugepages reported on node 1
00:07:36.711 [2024-07-15 15:12:40.485689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.711 [2024-07-15 15:12:40.554266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
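The same EAL argument block repeats for every accel_perf run in this job. Annotated once below with their standard DPDK meanings (the gloss is editorial, not from the log):

```bash
# The EAL arguments above, one flag per line (standard DPDK semantics):
#   -c 0x1                          core mask: run on core 0 only
#   --no-shconf                     do not create a shared EAL configuration
#   --huge-unlink                   unlink hugepage files after mapping them
#   --no-telemetry                  disable the DPDK telemetry socket
#   --log-level=lib.eal:6           per-component log verbosity
#   --base-virtaddr=0x200000000000  fixed base address for memory mappings
#   --match-allocations             free hugepages exactly as they were allocated
#   --file-prefix=spdk_pid2885052   per-process prefix so concurrent runs don't collide
```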
00:07:36.711 15:12:40 accel.accel_dualcast -- accel/accel.sh@19-23 -- # [option trace elided: val=0x1, accel_opc=dualcast, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:38.136 15:12:41 accel.accel_dualcast -- accel/accel.sh@19-21 -- # [post-run option dump elided: empty val= entries only]
00:07:38.136 15:12:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:38.136 15:12:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:38.136 15:12:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:38.136 real 0m1.339s
00:07:38.136 user 0m1.226s
00:07:38.136 sys 0m0.127s
00:07:38.136 15:12:41 accel.accel_dualcast -- common/autotest_common.sh@1124/@10 -- # [xtrace_disable / set +x elided]
00:07:38.136 ************************************
00:07:38.136 END TEST accel_dualcast
00:07:38.136 ************************************
00:07:38.136 15:12:41 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:38.136 15:12:41 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:38.136 ************************************
00:07:38.136 START TEST accel_compare
00:07:38.136 ************************************
00:07:38.136 15:12:41 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:38.136 15:12:41 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:38.136 15:12:41 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config [config boilerplate elided]
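Each real/user/sys triple in this log is run_test timing the whole accel_test invocation between the START and END banners. A hypothetical reduction of that wrapper pattern (names taken from the log; the body is illustrative, not the autotest_common.sh source):

```bash
# Hypothetical reduction of the run_test pattern seen throughout this log:
# banner, time the test body, banner, propagate the exit code.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # produces the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

run_test accel_compare accel_test -t 1 -w compare -y
```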
00:07:38.137 [2024-07-15 15:12:41.841305] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:07:38.136 [2024-07-15 15:12:41.841377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885339 ]
00:07:38.136 EAL: No free 2048 kB hugepages reported on node 1
00:07:38.136 [2024-07-15 15:12:41.912213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.136 [2024-07-15 15:12:41.980763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.137 15:12:42 accel.accel_compare -- accel/accel.sh@19-23 -- # [option trace elided: val=0x1, accel_opc=compare, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:39.508 15:12:43 accel.accel_compare -- accel/accel.sh@19-21 -- # [post-run option dump elided: empty val= entries only]
00:07:39.508 15:12:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:39.508 15:12:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:39.508 15:12:43 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.508 real 0m1.345s
00:07:39.508 user 0m1.221s
00:07:39.508 sys 0m0.138s
00:07:39.508 15:12:43 accel.accel_compare -- common/autotest_common.sh@1124/@10 -- # [xtrace_disable / set +x elided]
00:07:39.508 ************************************
00:07:39.508 END TEST accel_compare
00:07:39.508 ************************************
00:07:39.508 15:12:43 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:39.508 15:12:43 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:39.508 ************************************
00:07:39.508 START TEST accel_xor
00:07:39.508 ************************************
00:07:39.509 15:12:43 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:39.509 15:12:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:39.509 15:12:43 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config [config boilerplate elided]
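The xor workload XORs several source buffers into one destination; with no -x flag, the option trace below reports the default of two sources (val=2). A hand-run sketch, with the same caveats as the earlier examples:

```bash
# xor with the default two source buffers, matching the run above:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w xor -y
```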
00:07:39.509 [2024-07-15 15:12:43.269783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885619 ] 00:07:39.509 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.509 [2024-07-15 15:12:43.340076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.509 [2024-07-15 15:12:43.407242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:39.767 15:12:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@20 -- 
00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:40.702 15:12:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:40.702 real 0m1.344s
00:07:40.702 user 0m1.218s
00:07:40.702 sys 0m0.140s
00:07:40.702 15:12:44 accel.accel_xor -- common/autotest_common.sh@1124/@10 -- # [xtrace_disable / set +x elided]
00:07:40.702 ************************************
00:07:40.702 END TEST accel_xor
00:07:40.702 ************************************
00:07:40.962 15:12:44 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:40.962 15:12:44 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:40.962 ************************************
00:07:40.962 START TEST accel_xor
00:07:40.962 ************************************
00:07:40.962 15:12:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:40.963 15:12:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:40.963 15:12:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config [config boilerplate elided]
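This second xor pass repeats the workload with -x 3, and the option trace below accordingly reports three source buffers instead of the default two:

```bash
# Same xor workload, widened to three source buffers via -x:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w xor -y -x 3
```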
00:07:40.963 [2024-07-15 15:12:44.698073] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:07:40.963 [2024-07-15 15:12:44.698134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885904 ]
00:07:40.963 EAL: No free 2048 kB hugepages reported on node 1
00:07:40.963 [2024-07-15 15:12:44.768697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.963 [2024-07-15 15:12:44.838003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.221 15:12:44 accel.accel_xor -- accel/accel.sh@19-23 -- # [option trace elided: val=0x1, accel_opc=xor, val=3 (xor sources), val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes]
00:07:42.157 15:12:46 accel.accel_xor -- accel/accel.sh@19-21 -- # [post-run option dump elided: empty val= entries only]
00:07:42.157 15:12:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:42.157 15:12:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:42.157 15:12:46 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:42.157 real 0m1.347s
00:07:42.157 user 0m1.226s
00:07:42.157 sys 0m0.134s
00:07:42.157 15:12:46 accel.accel_xor -- common/autotest_common.sh@1124/@10 -- # [xtrace_disable / set +x elided]
00:07:42.157 ************************************
00:07:42.157 END TEST accel_xor
00:07:42.157 ************************************
00:07:42.157 15:12:46 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:42.157 15:12:46 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:42.415 ************************************
00:07:42.415 START TEST accel_dif_verify
00:07:42.415 ************************************
00:07:42.416 15:12:46 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:42.416 15:12:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:42.416 15:12:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config [config boilerplate elided]
00:07:42.416 [2024-07-15 15:12:46.125013] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:07:42.416 [2024-07-15 15:12:46.125073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886181 ]
00:07:42.416 EAL: No free 2048 kB hugepages reported on node 1
00:07:42.416 [2024-07-15 15:12:46.193741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.416 [2024-07-15 15:12:46.261805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.416 15:12:46 accel.accel_dif_verify -- accel/accel.sh@19-23 -- # [option trace elided: val=0x1, accel_opc=dif_verify, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=No]
00:07:43.790 15:12:47 accel.accel_dif_verify -- accel/accel.sh@19-21 -- # [post-run option dump elided: empty val= entries only]
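The dif_verify trace above carries two 4096-byte buffer sizes plus '512 bytes' and '8 bytes', which is consistent with a T10-DIF-style layout: 512-byte guarded blocks, each with 8 bytes of protection information. The exact meaning of each traced field is an assumption here, since accel.sh does not label them. A quick sanity check of that assumed layout:

```bash
# Assumed T10-DIF-style layout for the dif_verify run: a 4096-byte payload in
# 512-byte guarded blocks, each carrying 8 bytes of protection information (PI).
payload=4096 block=512 pi=8
blocks=$((payload / block))   # 8 guarded blocks
pi_total=$((blocks * pi))     # 64 bytes of PI across the buffer
echo "$blocks blocks, $pi_total PI bytes"
```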
00:07:43.790 15:12:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:43.790 15:12:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:43.790 15:12:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:43.790 real 0m1.344s
00:07:43.790 user 0m1.227s
00:07:43.790 sys 0m0.132s
00:07:43.790 15:12:47 accel.accel_dif_verify -- common/autotest_common.sh@1124/@10 -- # [xtrace_disable / set +x elided]
00:07:43.790 ************************************
00:07:43.790 END TEST accel_dif_verify
00:07:43.790 ************************************
00:07:43.790 15:12:47 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:43.790 15:12:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:43.790 ************************************
00:07:43.790 START TEST accel_dif_generate
00:07:43.790 ************************************
00:07:43.790 15:12:47 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:43.790 15:12:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:43.790 15:12:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config [config boilerplate elided]
00:07:43.790 [2024-07-15 15:12:47.548557] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:07:43.790 [2024-07-15 15:12:47.548632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886459 ]
00:07:43.790 EAL: No free 2048 kB hugepages reported on node 1
00:07:43.790 [2024-07-15 15:12:47.618526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.790 [2024-07-15 15:12:47.687108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.049 15:12:47 accel.accel_dif_generate -- accel/accel.sh@19-23 -- # [option trace elided: val=0x1, accel_opc=dif_generate, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=No]
00:07:44.983 15:12:48 accel.accel_dif_generate -- accel/accel.sh@19-21 -- # [post-run option dump elided: empty val= entries only]
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.983 00:07:44.983 real 0m1.347s 00:07:44.983 user 0m1.232s 00:07:44.983 sys 0m0.131s 00:07:44.983 15:12:48 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.983 15:12:48 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:44.983 ************************************ 00:07:44.983 END TEST accel_dif_generate 00:07:44.983 ************************************ 00:07:45.242 15:12:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.242 15:12:48 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:45.242 15:12:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:45.242 15:12:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.242 15:12:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.242 ************************************ 00:07:45.242 START TEST accel_dif_generate_copy 00:07:45.242 ************************************ 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:45.242 15:12:48 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:45.242 [2024-07-15 15:12:48.964018] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
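The dif_generate pass that just completed drove the software accel module for one second against 4096-byte buffers carved into 512-byte blocks with 8 bytes of DIF metadata apiece, finishing in about 1.35 s of wall time. For replaying just that workload by hand, a minimal sketch of the direct invocation follows; it assumes the built tree at the workspace path shown in the trace, uses only flags visible above, and drops the -c /dev/fd/62 channel through which the harness streams its JSON accel config (SPDK below is merely shorthand for that path, not a variable the harness sets):

  # Sketch only: flags copied from the traced command; harness config channel omitted.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w dif_generate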
00:07:45.242 [2024-07-15 15:12:48.964092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886695 ] 00:07:45.242 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.242 [2024-07-15 15:12:49.034260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.242 [2024-07-15 15:12:49.101956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.242 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.500 15:12:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.436 00:07:46.436 real 0m1.347s 00:07:46.436 user 0m1.242s 00:07:46.436 sys 0m0.120s 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.436 15:12:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.436 ************************************ 00:07:46.436 END TEST accel_dif_generate_copy 00:07:46.436 ************************************ 00:07:46.436 15:12:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.436 15:12:50 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:46.436 15:12:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.436 15:12:50 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:46.436 15:12:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.436 15:12:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.694 ************************************ 00:07:46.694 START TEST accel_comp 00:07:46.694 ************************************ 00:07:46.694 15:12:50 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.694 15:12:50 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:46.694 15:12:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:46.694 [2024-07-15 15:12:50.389322] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:46.694 [2024-07-15 15:12:50.389391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886915 ] 00:07:46.695 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.695 [2024-07-15 15:12:50.459974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.695 [2024-07-15 15:12:50.528748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:46.695 15:12:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:48.074 15:12:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.074 00:07:48.074 real 0m1.348s 00:07:48.074 user 0m1.228s 00:07:48.074 sys 0m0.134s 00:07:48.074 15:12:51 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.074 15:12:51 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:48.074 ************************************ 00:07:48.074 END TEST accel_comp 00:07:48.074 ************************************ 00:07:48.074 15:12:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.074 15:12:51 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.074 15:12:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:48.074 15:12:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.074 15:12:51 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:48.074 ************************************ 00:07:48.074 START TEST accel_decomp 00:07:48.074 ************************************ 00:07:48.074 15:12:51 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:48.074 15:12:51 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:48.074 [2024-07-15 15:12:51.818824] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
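Both halves of the compression path read the same corpus, test/accel/bib, handed to accel_perf with -l. The compress run that ended just above took real 0m1.348s on the software module; the decompress run now starting reuses the same flags and adds -y. A self-contained sketch of the two commands, with the workspace root taken from the trace and the harness's /dev/fd/62 config channel again left out:

  # Sketch of the traced invocations; -y appears only on the decompress run.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w compress -l $SPDK/test/accel/bib
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y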
00:07:48.074 [2024-07-15 15:12:51.818886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887143 ] 00:07:48.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.074 [2024-07-15 15:12:51.889960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.074 [2024-07-15 15:12:51.958846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.333 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:48.334 15:12:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.270 15:12:53 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.270 15:12:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.271 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.271 15:12:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.271 15:12:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.271 15:12:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.271 15:12:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.271 00:07:49.271 real 0m1.348s 00:07:49.271 user 0m1.230s 00:07:49.271 sys 0m0.135s 00:07:49.271 15:12:53 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.271 15:12:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:49.271 ************************************ 00:07:49.271 END TEST accel_decomp 00:07:49.271 ************************************ 00:07:49.271 15:12:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.271 15:12:53 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.271 15:12:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:49.271 15:12:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.271 15:12:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.529 ************************************ 00:07:49.529 START TEST accel_decomp_full 00:07:49.529 ************************************ 00:07:49.529 15:12:53 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:49.529 15:12:53 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:49.529 [2024-07-15 15:12:53.244613] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:49.529 [2024-07-15 15:12:53.244674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887371 ] 00:07:49.529 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.529 [2024-07-15 15:12:53.315651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.529 [2024-07-15 15:12:53.388535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.529 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.787 15:12:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.724 15:12:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.724 00:07:50.724 real 0m1.361s 00:07:50.724 user 0m1.242s 00:07:50.725 sys 0m0.133s 00:07:50.725 15:12:54 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.725 15:12:54 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:50.725 ************************************ 00:07:50.725 END TEST accel_decomp_full 00:07:50.725 ************************************ 00:07:50.725 15:12:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.725 15:12:54 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.725 15:12:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:50.725 15:12:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.725 15:12:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.983 ************************************ 00:07:50.983 START TEST accel_decomp_mcore 00:07:50.983 ************************************ 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:50.983 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:50.983 [2024-07-15 15:12:54.680930] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
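accel_decomp_mcore, starting here, is the first run in this block to widen the core mask: -m 0xf on the accel_perf line surfaces as -c 0xf in the EAL parameters just below, and four reactors come up on cores 0 through 3 before the workload variables are replayed. Sketched under the same assumptions as above:

  # Same decompress workload, now across a four-core mask.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf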
00:07:50.984 [2024-07-15 15:12:54.680981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887629 ] 00:07:50.984 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.984 [2024-07-15 15:12:54.751895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.984 [2024-07-15 15:12:54.824060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.984 [2024-07-15 15:12:54.824155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.984 [2024-07-15 15:12:54.824239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.984 [2024-07-15 15:12:54.824241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.984 15:12:54 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:50.984 15:12:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:50.984 [remaining config reads traced identically, one `case "$var" in` / `IFS=:` / `read -r var val` round per value: val=32, val=32, val=1, val='1 seconds', val=Yes, then idle val= reads until the 1-second run completes]
00:07:52.358 15:12:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:52.358 15:12:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:52.358 15:12:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:52.358 real 0m1.357s
00:07:52.358 user 0m4.552s
00:07:52.358 sys 0m0.150s
00:07:52.358 15:12:56 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:52.358 15:12:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:52.358 ************************************
00:07:52.358 END TEST accel_decomp_mcore
00:07:52.358 ************************************
00:07:52.358 15:12:56 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:52.358 15:12:56 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:52.358 ************************************
00:07:52.358 START TEST accel_decomp_full_mcore
00:07:52.358 ************************************
00:07:52.358 15:12:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:52.358 15:12:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:52.358 [accel_test/build_accel_config setup trace omitted: accel_json_cfg=(), feature checks [[ 0 -gt 0 ]] all false, [[ -n '' ]], local IFS=, , jq -r .]
00:07:52.358 [2024-07-15 15:12:56.124262] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
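For reference, the accel_perf invocation traced above can be reproduced by hand against a built SPDK tree. This is a sketch reconstructed from the traced command line, not part of the harness: the paths and flags are copied from the trace, while the empty JSON config passed by process substitution is an assumption standing in for the one build_accel_config normally delivers on /dev/fd/62.

    # Sketch: re-running the full-buffer multicore decompress case by hand.
    # Paths/flags are copied from the trace above; the '{}' JSON config is a
    # stand-in (assumption) for what build_accel_config writes to /dev/fd/62.
    # -t 1: run 1 second; -w decompress: workload; -m 0xf: core mask for the
    # four reactors seen starting in the log below.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c <(echo '{}') -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf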
00:07:52.358 [2024-07-15 15:12:56.124337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887918 ]
00:07:52.358 EAL: No free 2048 kB hugepages reported on node 1
00:07:52.358 [2024-07-15 15:12:56.194830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:52.617 [2024-07-15 15:12:56.267106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:52.617 [2024-07-15 15:12:56.267201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:52.617 [2024-07-15 15:12:56.267262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:52.617 [2024-07-15 15:12:56.267264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.617 15:12:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:52.617 15:12:56 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:52.617 15:12:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:52.617 15:12:56 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:52.617 15:12:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:52.617 [remaining config reads traced as above: val=32, val=32, val=1, val='1 seconds', val=Yes, then idle val= reads until the run completes]
00:07:53.809 15:12:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:53.809 15:12:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:53.809 15:12:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:53.809 real 0m1.370s
00:07:53.809 user 0m4.596s
00:07:53.809 sys 0m0.141s
00:07:53.809 15:12:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:53.809 15:12:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:07:53.809 ************************************
00:07:53.809 END TEST accel_decomp_full_mcore
00:07:53.809 ************************************
00:07:53.810 15:12:57 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:53.810 15:12:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:53.810 ************************************
00:07:53.810 START TEST accel_decomp_mthread
00:07:53.810 ************************************
00:07:53.810 15:12:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:53.810 15:12:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:07:53.810 [accel_test/build_accel_config setup trace omitted: accel_json_cfg=(), feature checks [[ 0 -gt 0 ]] all false, [[ -n '' ]], local IFS=, , jq -r .]
00:07:53.810 [2024-07-15 15:12:57.561195] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
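The START/END banners and the real/user/sys triples that bracket each test come from the run_test helper in autotest_common.sh. A minimal sketch of that pattern is below; it is not SPDK's actual implementation (which also manages the xtrace state visible in this log), just the shape of it:

    # Minimal sketch of the run_test pattern (assumption: the real helper in
    # common/autotest_common.sh does more, e.g. xtrace_disable bookkeeping).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # `time` produces the real/user/sys lines in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # usage, matching the traced call:
    # run_test accel_decomp_mthread accel_test -t 1 -w decompress \
    #     -l "$SPDK/test/accel/bib" -y -T 2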
00:07:53.810 [2024-07-15 15:12:57.561271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888200 ]
00:07:53.810 EAL: No free 2048 kB hugepages reported on node 1
00:07:53.810 [2024-07-15 15:12:57.631273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.810 [2024-07-15 15:12:57.699150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.069 15:12:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:07:54.069 15:12:57 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:54.069 15:12:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:54.069 15:12:57 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:07:54.069 15:12:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:54.069 [remaining config reads traced as above: val=32, val=32, val=2, val='1 seconds', val=Yes, then idle val= reads until the run completes]
00:07:55.006 15:12:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:55.006 15:12:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:55.006 15:12:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:55.006 real 0m1.351s
00:07:55.006 user 0m1.225s
00:07:55.006 sys 0m0.141s
00:07:55.006 15:12:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:55.006 15:12:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:55.006 ************************************
00:07:55.006 END TEST accel_decomp_mthread
00:07:55.006 ************************************
00:07:55.266 15:12:58 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:55.266 15:12:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:55.266 ************************************
00:07:55.266 START TEST accel_decomp_full_mthread
00:07:55.266 ************************************
00:07:55.266 15:12:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:07:55.266 15:12:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:55.266 [accel_test/build_accel_config setup trace omitted: accel_json_cfg=(), feature checks [[ 0 -gt 0 ]] all false, [[ -n '' ]], local IFS=, , jq -r .]
00:07:55.266 [2024-07-15 15:12:58.983557] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
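build_accel_config collects optional module settings in the accel_json_cfg array and pipes them through jq -r . to accel_perf's -c /dev/fd/62. Plain bash process substitution gives the same effect, since it also exposes the data on a /dev/fd/N path. The sketch below assumes an empty config, consistent with the all-false feature checks in the trace:

    # Sketch: handing accel_perf its JSON config on a file descriptor,
    # mirroring the traced `-c /dev/fd/62`. The empty object is an
    # assumption consistent with the all-false [[ 0 -gt 0 ]] checks above.
    accel_cfg='{}'
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -c <(printf '%s\n' "$accel_cfg") \
        -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2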
00:07:55.266 [2024-07-15 15:12:58.983614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888482 ]
00:07:55.266 EAL: No free 2048 kB hugepages reported on node 1
00:07:55.266 [2024-07-15 15:12:59.050943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.266 [2024-07-15 15:12:59.119325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.266 15:12:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:07:55.266 15:12:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:55.266 15:12:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:55.266 15:12:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:07:55.525 15:12:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:55.525 [remaining config reads traced as above: val=32, val=32, val=2, val='1 seconds', val=Yes, then idle val= reads until the run completes]
00:07:56.462 15:13:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:56.462 15:13:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:56.462 15:13:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:56.462 real 0m1.359s
00:07:56.462 user 0m1.250s
00:07:56.462 sys 0m0.124s
00:07:56.462 15:13:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:56.462 15:13:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:07:56.462 ************************************
00:07:56.462 END
TEST accel_decomp_full_mthread 00:07:56.462 ************************************ 00:07:56.462 15:13:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.462 15:13:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:56.462 15:13:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:56.462 15:13:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:56.462 15:13:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.462 15:13:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.462 15:13:00 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:56.462 15:13:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.462 15:13:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.462 15:13:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.462 15:13:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.462 15:13:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:56.462 15:13:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.462 15:13:00 accel -- accel/accel.sh@41 -- # jq -r . 00:07:56.721 ************************************ 00:07:56.721 START TEST accel_dif_functional_tests 00:07:56.721 ************************************ 00:07:56.721 15:13:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:56.721 [2024-07-15 15:13:00.429209] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:56.722 [2024-07-15 15:13:00.429250] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888766 ] 00:07:56.722 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.722 [2024-07-15 15:13:00.495536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.722 [2024-07-15 15:13:00.565352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.722 [2024-07-15 15:13:00.565450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.722 [2024-07-15 15:13:00.565453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.980 00:07:56.980 00:07:56.980 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.980 http://cunit.sourceforge.net/ 00:07:56.980 00:07:56.980 00:07:56.980 Suite: accel_dif 00:07:56.980 Test: verify: DIF generated, GUARD check ...passed 00:07:56.980 Test: verify: DIF generated, APPTAG check ...passed 00:07:56.980 Test: verify: DIF generated, REFTAG check ...passed 00:07:56.980 Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:13:00.633459] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:56.980 passed 00:07:56.980 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:13:00.633510] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:56.980 passed 00:07:56.980 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:13:00.633532] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:56.980 passed 00:07:56.980 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:56.980 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
15:13:00.633581] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:56.981 passed 00:07:56.981 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:56.981 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:56.981 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:56.981 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:13:00.633684] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:56.981 passed 00:07:56.981 Test: verify copy: DIF generated, GUARD check ...passed 00:07:56.981 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:56.981 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:56.981 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:13:00.633795] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:56.981 passed 00:07:56.981 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:13:00.633823] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:56.981 passed 00:07:56.981 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:13:00.633851] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:56.981 passed 00:07:56.981 Test: generate copy: DIF generated, GUARD check ...passed 00:07:56.981 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:56.981 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:56.981 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:56.981 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:56.981 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:56.981 Test: generate copy: iovecs-len validate ...[2024-07-15 15:13:00.634017] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:56.981 passed
00:07:56.981 Test: generate copy: buffer alignment validate ...passed
00:07:56.981
00:07:56.981 Run Summary:    Type   Total     Ran  Passed  Failed  Inactive
00:07:56.981               suites       1       1     n/a       0         0
00:07:56.981                tests      26      26      26       0         0
00:07:56.981              asserts     115     115     115       0       n/a
00:07:56.981
00:07:56.981 Elapsed time = 0.002 seconds
00:07:56.981
00:07:56.981 real 0m0.401s
00:07:56.981 user 0m0.608s
00:07:56.981 sys 0m0.147s
00:07:56.981 15:13:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:56.981 15:13:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:56.981 ************************************
00:07:56.981 END TEST accel_dif_functional_tests
00:07:56.981 ************************************
00:07:56.981 15:13:00 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:56.981 real 0m31.499s
00:07:56.981 user 0m34.538s
00:07:56.981 sys 0m5.016s
00:07:56.981 15:13:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:56.981 15:13:00 accel -- common/autotest_common.sh@10 -- # set +x
00:07:56.981 ************************************
00:07:56.981 END TEST accel
00:07:56.981 ************************************
00:07:56.981 15:13:00 -- common/autotest_common.sh@1142 -- # return 0
00:07:56.981 15:13:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:56.981 15:13:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:56.981 15:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:56.981 15:13:00 -- common/autotest_common.sh@10 -- # set +x
00:07:57.241 ************************************
00:07:57.241 START TEST accel_rpc
00:07:57.241 ************************************
00:07:57.241 15:13:00 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:57.241 * Looking for test storage...
00:07:57.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:07:57.241 15:13:01 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:57.241 15:13:01 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2888877
00:07:57.241 15:13:01 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2888877
00:07:57.241 15:13:01 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:57.241 15:13:01 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2888877 ']'
00:07:57.241 15:13:01 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:57.241 15:13:01 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:57.241 15:13:01 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:57.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:57.241 15:13:01 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:57.241 15:13:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:57.241 [2024-07-15 15:13:01.067351] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
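The accel_rpc test starting here drives the target over JSON-RPC while --wait-for-rpc holds it in the pre-init state. The rpc_cmd calls traced below are equivalent to the manual session sketched here (rpc.py talks to /var/tmp/spdk.sock by default; the backgrounding and cleanup are illustrative assumptions, not the harness's own process management):

    # Sketch of the RPC flow exercised by accel_rpc.sh, run by hand.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &   # hold subsystems pre-init
    tgt_pid=$!
    # assign the copy opcode before framework_start_init, as the test does:
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    # read the assignment back; the test greps for "software" here:
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy
    kill "$tgt_pid"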
00:07:57.241 [2024-07-15 15:13:01.067411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888877 ] 00:07:57.241 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.241 [2024-07-15 15:13:01.136216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.575 [2024-07-15 15:13:01.212481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.143 15:13:01 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.143 15:13:01 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:58.143 15:13:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:58.143 15:13:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:58.143 15:13:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:58.143 15:13:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:58.143 15:13:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:58.143 15:13:01 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.143 15:13:01 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.143 15:13:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.143 ************************************ 00:07:58.143 START TEST accel_assign_opcode 00:07:58.143 ************************************ 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.143 [2024-07-15 15:13:01.890517] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.143 [2024-07-15 15:13:01.898530] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.143 15:13:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.402 software 00:07:58.402 00:07:58.402 real 0m0.219s 00:07:58.402 user 0m0.029s 00:07:58.402 sys 0m0.006s 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.402 15:13:02 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.402 ************************************ 00:07:58.402 END TEST accel_assign_opcode 00:07:58.402 ************************************ 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:58.402 15:13:02 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2888877 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2888877 ']' 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2888877 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2888877 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2888877' 00:07:58.402 killing process with pid 2888877 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@967 -- # kill 2888877 00:07:58.402 15:13:02 accel_rpc -- common/autotest_common.sh@972 -- # wait 2888877 00:07:58.672 00:07:58.672 real 0m1.563s 00:07:58.672 user 0m1.571s 00:07:58.672 sys 0m0.461s 00:07:58.672 15:13:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.672 15:13:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.672 ************************************ 00:07:58.672 END TEST accel_rpc 00:07:58.672 ************************************ 00:07:58.672 15:13:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:58.672 15:13:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.672 15:13:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:58.672 15:13:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.672 15:13:02 -- common/autotest_common.sh@10 -- # set +x 00:07:58.672 ************************************ 00:07:58.672 START TEST app_cmdline 00:07:58.672 ************************************ 00:07:58.672 15:13:02 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.932 * Looking for test storage... 
00:07:58.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:58.932 15:13:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:58.932 15:13:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2889251 00:07:58.932 15:13:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:58.932 15:13:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2889251 00:07:58.932 15:13:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2889251 ']' 00:07:58.932 15:13:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.932 15:13:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.932 15:13:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.932 15:13:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.932 15:13:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:58.932 [2024-07-15 15:13:02.704418] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:58.932 [2024-07-15 15:13:02.704479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889251 ] 00:07:58.932 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.932 [2024-07-15 15:13:02.772422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.191 [2024-07-15 15:13:02.846828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.776 15:13:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.776 15:13:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:59.776 { 00:07:59.776 "version": "SPDK v24.09-pre git sha1 248c547d0", 00:07:59.776 "fields": { 00:07:59.776 "major": 24, 00:07:59.776 "minor": 9, 00:07:59.776 "patch": 0, 00:07:59.776 "suffix": "-pre", 00:07:59.776 "commit": "248c547d0" 00:07:59.776 } 00:07:59.776 } 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.776 15:13:03 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.776 15:13:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.776 15:13:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:59.776 15:13:03 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.035 15:13:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.035 15:13:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.035 15:13:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.035 request: 00:08:00.035 { 00:08:00.035 "method": "env_dpdk_get_mem_stats", 00:08:00.035 "req_id": 1 00:08:00.035 } 00:08:00.035 Got JSON-RPC error response 00:08:00.035 response: 00:08:00.035 { 00:08:00.035 "code": -32601, 00:08:00.035 "message": "Method not found" 00:08:00.035 } 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:00.035 15:13:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2889251 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2889251 ']' 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2889251 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2889251 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2889251' 00:08:00.035 killing process with pid 2889251 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@967 -- # kill 2889251 00:08:00.035 15:13:03 app_cmdline -- common/autotest_common.sh@972 -- # wait 2889251 00:08:00.603 00:08:00.603 real 0m1.678s 00:08:00.603 user 0m1.944s 00:08:00.603 sys 0m0.480s 00:08:00.603 15:13:04 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
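Condensed, the allowlist behavior app_cmdline exercises comes down to three calls — a minimal sketch, assuming the spdk_tgt and rpc.py entry points shown above:
    # Expose exactly two RPC methods on the target.
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version        # allowed: returns the version JSON above
    scripts/rpc.py rpc_get_methods         # allowed: lists only the two permitted names
    scripts/rpc.py env_dpdk_get_mem_stats  # filtered: JSON-RPC error -32601 'Method not found'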
00:08:00.603 15:13:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.603 ************************************ 00:08:00.603 END TEST app_cmdline 00:08:00.603 ************************************ 00:08:00.603 15:13:04 -- common/autotest_common.sh@1142 -- # return 0 00:08:00.603 15:13:04 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:00.603 15:13:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:00.603 15:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.603 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.603 ************************************ 00:08:00.603 START TEST version 00:08:00.604 ************************************ 00:08:00.604 15:13:04 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:00.604 * Looking for test storage... 00:08:00.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:00.604 15:13:04 version -- app/version.sh@17 -- # get_header_version major 00:08:00.604 15:13:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # cut -f2 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.604 15:13:04 version -- app/version.sh@17 -- # major=24 00:08:00.604 15:13:04 version -- app/version.sh@18 -- # get_header_version minor 00:08:00.604 15:13:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # cut -f2 00:08:00.604 15:13:04 version -- app/version.sh@18 -- # minor=9 00:08:00.604 15:13:04 version -- app/version.sh@19 -- # get_header_version patch 00:08:00.604 15:13:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # cut -f2 00:08:00.604 15:13:04 version -- app/version.sh@19 -- # patch=0 00:08:00.604 15:13:04 version -- app/version.sh@20 -- # get_header_version suffix 00:08:00.604 15:13:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # cut -f2 00:08:00.604 15:13:04 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.604 15:13:04 version -- app/version.sh@20 -- # suffix=-pre 00:08:00.604 15:13:04 version -- app/version.sh@22 -- # version=24.9 00:08:00.604 15:13:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.604 15:13:04 version -- app/version.sh@28 -- # version=24.9rc0 00:08:00.604 15:13:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:00.604 15:13:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:00.604 15:13:04 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:00.604 15:13:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:00.604 00:08:00.604 real 0m0.176s 00:08:00.604 user 0m0.085s 00:08:00.604 sys 0m0.132s 00:08:00.604 15:13:04 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.604 15:13:04 version -- common/autotest_common.sh@10 -- # set +x 00:08:00.604 ************************************ 00:08:00.604 END TEST version 00:08:00.604 ************************************ 00:08:00.604 15:13:04 -- common/autotest_common.sh@1142 -- # return 0 00:08:00.604 15:13:04 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:00.604 15:13:04 -- spdk/autotest.sh@198 -- # uname -s 00:08:00.863 15:13:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:00.863 15:13:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:00.863 15:13:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:00.863 15:13:04 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:00.863 15:13:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:00.863 15:13:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:00.863 15:13:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.863 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.863 15:13:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:00.863 15:13:04 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:00.863 15:13:04 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:00.863 15:13:04 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:00.863 15:13:04 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:00.863 15:13:04 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:00.863 15:13:04 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.863 15:13:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.863 15:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.863 15:13:04 -- common/autotest_common.sh@10 -- # set +x 00:08:00.863 ************************************ 00:08:00.863 START TEST nvmf_tcp 00:08:00.863 ************************************ 00:08:00.863 15:13:04 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:00.863 * Looking for test storage... 00:08:00.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.863 15:13:04 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.863 15:13:04 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.863 15:13:04 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.863 15:13:04 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.863 15:13:04 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.863 15:13:04 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.863 15:13:04 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:00.863 15:13:04 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:00.863 15:13:04 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.863 15:13:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:00.863 15:13:04 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:00.863 15:13:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.863 15:13:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.863 15:13:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:00.863 ************************************ 00:08:00.863 START TEST nvmf_example 00:08:00.863 ************************************ 00:08:00.863 15:13:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.122 * Looking for test storage... 
00:08:01.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.122 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.122 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.123 15:13:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.680 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.680 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:07.681 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:07.681 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:07.681 Found net devices under 
0000:af:00.0: cvl_0_0 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:07.681 Found net devices under 0000:af:00.1: cvl_0_1 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:07.681 15:13:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.681 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.681 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:07.681 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:07.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:08:07.681 00:08:07.681 --- 10.0.0.2 ping statistics --- 00:08:07.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.682 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:08:07.682 00:08:07.682 --- 10.0.0.1 ping statistics --- 00:08:07.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.682 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2892940 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2892940 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2892940 ']' 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
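The nvmftestinit plumbing above condenses to a short ip/iptables sequence — a sketch using the device names from this log (cvl_0_0 on the target side, cvl_0_1 on the initiator side):
    # Isolate the target port in its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic on port 4420, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1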
00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.682 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:07.682 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.244 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.244 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:08.244 15:13:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:08.244 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.244 15:13:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.244 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:08.244 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.244 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.244 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.244 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:08.244 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.244 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:08.245 15:13:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:08.245 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.431 Initializing NVMe Controllers 00:08:20.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:20.431 Initialization complete. Launching workers. 00:08:20.431 ======================================================== 00:08:20.431 Latency(us) 00:08:20.431 Device Information : IOPS MiB/s Average min max 00:08:20.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16196.25 63.27 3951.20 680.84 20014.50 00:08:20.431 ======================================================== 00:08:20.431 Total : 16196.25 63.27 3951.20 680.84 20014.50 00:08:20.431 00:08:20.431 15:13:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:20.431 15:13:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:20.431 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.431 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:20.431 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.432 rmmod nvme_tcp 00:08:20.432 rmmod nvme_fabrics 00:08:20.432 rmmod nvme_keyring 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2892940 ']' 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2892940 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2892940 ']' 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2892940 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2892940 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2892940' 00:08:20.432 killing process with pid 2892940 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 2892940 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 2892940 00:08:20.432 nvmf threads initialize successfully 00:08:20.432 bdev subsystem init successfully 00:08:20.432 created a nvmf target service 00:08:20.432 create targets's poll groups done 00:08:20.432 all subsystems of target started 00:08:20.432 nvmf target is 
running 00:08:20.432 all subsystems of target stopped 00:08:20.432 destroy targets's poll groups done 00:08:20.432 destroyed the nvmf target service 00:08:20.432 bdev subsystem finish successfully 00:08:20.432 nvmf threads destroy successfully 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.432 15:13:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.999 15:13:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.999 15:13:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:20.999 15:13:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:20.999 15:13:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:20.999 00:08:20.999 real 0m19.979s 00:08:20.999 user 0m45.017s 00:08:20.999 sys 0m6.862s 00:08:20.999 15:13:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.999 15:13:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:20.999 ************************************ 00:08:20.999 END TEST nvmf_example 00:08:20.999 ************************************ 00:08:20.999 15:13:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:20.999 15:13:24 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:20.999 15:13:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:20.999 15:13:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.999 15:13:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:20.999 ************************************ 00:08:20.999 START TEST nvmf_filesystem 00:08:20.999 ************************************ 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:20.999 * Looking for test storage... 
00:08:20.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:20.999 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:21.000 15:13:24 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:21.000 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:21.260 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:21.261 #define SPDK_CONFIG_H 00:08:21.261 #define SPDK_CONFIG_APPS 1 00:08:21.261 #define SPDK_CONFIG_ARCH native 00:08:21.261 #undef SPDK_CONFIG_ASAN 00:08:21.261 #undef SPDK_CONFIG_AVAHI 00:08:21.261 #undef SPDK_CONFIG_CET 00:08:21.261 #define SPDK_CONFIG_COVERAGE 1 00:08:21.261 #define SPDK_CONFIG_CROSS_PREFIX 00:08:21.261 #undef SPDK_CONFIG_CRYPTO 00:08:21.261 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:21.261 #undef SPDK_CONFIG_CUSTOMOCF 00:08:21.261 #undef SPDK_CONFIG_DAOS 00:08:21.261 #define SPDK_CONFIG_DAOS_DIR 00:08:21.261 #define SPDK_CONFIG_DEBUG 1 00:08:21.261 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:21.261 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:21.261 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:21.261 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:21.261 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:21.261 #undef SPDK_CONFIG_DPDK_UADK 00:08:21.261 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:21.261 #define SPDK_CONFIG_EXAMPLES 1 00:08:21.261 #undef SPDK_CONFIG_FC 00:08:21.261 #define SPDK_CONFIG_FC_PATH 00:08:21.261 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:21.261 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:21.261 #undef SPDK_CONFIG_FUSE 00:08:21.261 #undef SPDK_CONFIG_FUZZER 00:08:21.261 #define SPDK_CONFIG_FUZZER_LIB 00:08:21.261 #undef SPDK_CONFIG_GOLANG 00:08:21.261 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:21.261 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:21.261 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:21.261 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:21.261 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:21.261 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:21.261 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:21.261 #define SPDK_CONFIG_IDXD 1 00:08:21.261 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:21.261 #undef SPDK_CONFIG_IPSEC_MB 00:08:21.261 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:21.261 #define SPDK_CONFIG_ISAL 1 00:08:21.261 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:21.261 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:21.261 #define SPDK_CONFIG_LIBDIR 00:08:21.261 #undef SPDK_CONFIG_LTO 00:08:21.261 #define SPDK_CONFIG_MAX_LCORES 128 00:08:21.261 #define SPDK_CONFIG_NVME_CUSE 1 00:08:21.261 #undef SPDK_CONFIG_OCF 00:08:21.261 #define SPDK_CONFIG_OCF_PATH 00:08:21.261 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:21.261 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:21.261 #define SPDK_CONFIG_PGO_DIR 00:08:21.261 #undef SPDK_CONFIG_PGO_USE 00:08:21.261 #define SPDK_CONFIG_PREFIX /usr/local 00:08:21.261 #undef SPDK_CONFIG_RAID5F 00:08:21.261 #undef SPDK_CONFIG_RBD 00:08:21.261 #define SPDK_CONFIG_RDMA 1 00:08:21.261 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:21.261 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:21.261 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:21.261 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:21.261 #define SPDK_CONFIG_SHARED 1 00:08:21.261 #undef SPDK_CONFIG_SMA 00:08:21.261 #define SPDK_CONFIG_TESTS 1 00:08:21.261 #undef SPDK_CONFIG_TSAN 00:08:21.261 #define SPDK_CONFIG_UBLK 1 00:08:21.261 #define SPDK_CONFIG_UBSAN 1 00:08:21.261 #undef SPDK_CONFIG_UNIT_TESTS 00:08:21.261 #undef SPDK_CONFIG_URING 00:08:21.261 #define SPDK_CONFIG_URING_PATH 00:08:21.261 #undef SPDK_CONFIG_URING_ZNS 00:08:21.261 #undef SPDK_CONFIG_USDT 00:08:21.261 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:21.261 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:21.261 #define SPDK_CONFIG_VFIO_USER 1 00:08:21.261 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:21.261 #define SPDK_CONFIG_VHOST 1 00:08:21.261 #define SPDK_CONFIG_VIRTIO 1 00:08:21.261 #undef SPDK_CONFIG_VTUNE 00:08:21.261 #define SPDK_CONFIG_VTUNE_DIR 00:08:21.261 #define SPDK_CONFIG_WERROR 1 00:08:21.261 #define SPDK_CONFIG_WPDK_DIR 00:08:21.261 #undef SPDK_CONFIG_XNVME 00:08:21.261 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.261 15:13:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:21.262 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:21.263 15:13:24 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:21.263 15:13:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
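The autotest_common.sh@198–@240 entries above build a LeakSanitizer suppression file for libfuse3 and point LSAN_OPTIONS and the default RPC socket at fixed paths. A minimal standalone sketch of that setup, with paths copied from the trace and the original's cat plumbing simplified to a plain echo:

    # sketch of the suppression setup traced above; the original routes the
    # pattern through cat, shown here as an echo for brevity
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock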
00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2895431 ]] 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2895431 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.KQ6Hhk 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.KQ6Hhk/tests/target /tmp/spdk.KQ6Hhk 00:08:21.263 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=955215872 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4329213952 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55204790272 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742325760 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6537535488 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30867787776 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871162880 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12339081216 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9383936 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30870331392 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871162880 00:08:21.264 15:13:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=831488 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174228480 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174232576 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:21.264 * Looking for test storage... 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55204790272 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8752128000 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:21.264 15:13:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
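set_test_storage, traced at autotest_common.sh@327–@389 above, reads `df -T` into associative arrays and accepts a candidate mount only if it holds the requested ~2 GiB without pushing the filesystem past 95% full; in this run new_size 8752128000 = requested_size 2214592512 + uses[/] 6537535488, well under the limit. A condensed sketch of that selection, assuming the byte values in the trace come from a 1K-block-to-bytes conversion:

    # condensed sketch of the storage probe; column order as in the traced read
    target_dir=${1:-$PWD}
    declare -A fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        fss[$mount]=$fs
        sizes[$mount]=$((size * 1024))    # assumption: df 1K blocks -> bytes
        avails[$mount]=$((avail * 1024))
        uses[$mount]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

    requested_size=2214592512             # 2 GiB plus margin, from @358
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    target_space=${avails[$mount]}
    if ((target_space >= requested_size)); then
        new_size=$((requested_size + uses[$mount]))
        # >95% would overfill the filesystem: fall through to the next candidate
        ((new_size * 100 / sizes[$mount] > 95)) || echo "using $target_dir on $mount"
    fi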
00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:21.264 15:13:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.265 15:13:25 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.265 15:13:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:27.828 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:27.828 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.828 15:13:31 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:27.828 Found net devices under 0000:af:00.0: cvl_0_0 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:27.828 Found net devices under 0000:af:00.1: cvl_0_1 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.828 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.829 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:28.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:08:28.087 00:08:28.087 --- 10.0.0.2 ping statistics --- 00:08:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.087 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:08:28.087 00:08:28.087 --- 10.0.0.1 ping statistics --- 00:08:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.087 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.087 15:13:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.346 ************************************ 00:08:28.346 START TEST nvmf_filesystem_no_in_capsule 00:08:28.346 ************************************ 00:08:28.346 15:13:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:28.346 15:13:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:28.346 15:13:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:28.346 15:13:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.346 15:13:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:28.346 15:13:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2898674 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2898674 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2898674 ']' 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.346 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.346 [2024-07-15 15:13:32.043212] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:08:28.346 [2024-07-15 15:13:32.043257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.346 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.346 [2024-07-15 15:13:32.111748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.346 [2024-07-15 15:13:32.185020] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.346 [2024-07-15 15:13:32.185064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.346 [2024-07-15 15:13:32.185073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.346 [2024-07-15 15:13:32.185081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.346 [2024-07-15 15:13:32.185088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
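[annotation] The entries above show the harness launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waiting for its RPC socket before provisioning the target; the rpc_cmd calls that follow then build the subsystem. A minimal sketch of the equivalent shell, assuming the stock /var/tmp/spdk.sock socket and using an rpc_get_methods polling loop as a stand-in for the harness's waitforlisten helper (the exact probe it uses may differ):

    # Launch the target in the namespace that owns cvl_0_0 (paths as in this log).
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the RPC server answers; rpc_get_methods is just a cheap RPC to test with.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done
    # Provision the target exactly as the rpc_cmd entries in this log do:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB disk, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Running the target under ip netns exec is what lets the listener bind 10.0.0.2 on cvl_0_0 while the initiator keeps cvl_0_1/10.0.0.1 in the default namespace, giving a real TCP hop over the back-to-back e810 ports.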
00:08:28.346 [2024-07-15 15:13:32.185129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.346 [2024-07-15 15:13:32.185231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.346 [2024-07-15 15:13:32.185316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.346 [2024-07-15 15:13:32.185318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.277 [2024-07-15 15:13:32.911701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.277 15:13:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.277 Malloc1 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.277 [2024-07-15 15:13:33.064020] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:29.277 { 00:08:29.277 "name": "Malloc1", 00:08:29.277 "aliases": [ 00:08:29.277 "797e5f48-ae0b-4c85-bee0-76e40482e0ce" 00:08:29.277 ], 00:08:29.277 "product_name": "Malloc disk", 00:08:29.277 "block_size": 512, 00:08:29.277 "num_blocks": 1048576, 00:08:29.277 "uuid": "797e5f48-ae0b-4c85-bee0-76e40482e0ce", 00:08:29.277 "assigned_rate_limits": { 00:08:29.277 "rw_ios_per_sec": 0, 00:08:29.277 "rw_mbytes_per_sec": 0, 00:08:29.277 "r_mbytes_per_sec": 0, 00:08:29.277 "w_mbytes_per_sec": 0 00:08:29.277 }, 00:08:29.277 "claimed": true, 00:08:29.277 "claim_type": "exclusive_write", 00:08:29.277 "zoned": false, 00:08:29.277 "supported_io_types": { 00:08:29.277 "read": true, 00:08:29.277 "write": true, 00:08:29.277 "unmap": true, 00:08:29.277 "flush": true, 00:08:29.277 "reset": true, 00:08:29.277 "nvme_admin": false, 00:08:29.277 "nvme_io": false, 00:08:29.277 "nvme_io_md": false, 00:08:29.277 "write_zeroes": true, 00:08:29.277 "zcopy": true, 00:08:29.277 "get_zone_info": false, 00:08:29.277 "zone_management": false, 00:08:29.277 "zone_append": false, 00:08:29.277 "compare": false, 00:08:29.277 "compare_and_write": false, 00:08:29.277 "abort": true, 00:08:29.277 "seek_hole": false, 00:08:29.277 "seek_data": false, 00:08:29.277 "copy": true, 00:08:29.277 "nvme_iov_md": false 00:08:29.277 }, 00:08:29.277 "memory_domains": [ 00:08:29.277 { 
00:08:29.277 "dma_device_id": "system", 00:08:29.277 "dma_device_type": 1 00:08:29.277 }, 00:08:29.277 { 00:08:29.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.277 "dma_device_type": 2 00:08:29.277 } 00:08:29.277 ], 00:08:29.277 "driver_specific": {} 00:08:29.277 } 00:08:29.277 ]' 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:29.277 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:29.539 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:29.539 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:29.539 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:29.539 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:29.539 15:13:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:30.908 15:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:30.908 15:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:30.908 15:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:30.908 15:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:30.908 15:13:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:32.803 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:33.093 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:34.025 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.960 ************************************ 00:08:34.960 START TEST filesystem_ext4 00:08:34.960 ************************************ 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:34.960 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:34.960 15:13:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:34.960 mke2fs 1.46.5 (30-Dec-2021) 00:08:34.960 Discarding device blocks: 0/522240 done 00:08:34.960 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:34.960 Filesystem UUID: f636c2b4-0456-40bb-bca5-e9972974648c 00:08:34.960 Superblock backups stored on blocks: 00:08:34.960 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:34.960 00:08:34.960 Allocating group tables: 0/64 done 00:08:34.960 Writing inode tables: 0/64 done 00:08:35.218 Creating journal (8192 blocks): done 00:08:35.218 Writing superblocks and filesystem accounting information: 0/64 done 00:08:35.218 00:08:35.218 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:35.218 15:13:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.218 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.218 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2898674 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.476 00:08:35.476 real 0m0.498s 00:08:35.476 user 0m0.028s 00:08:35.476 sys 0m0.076s 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:35.476 ************************************ 00:08:35.476 END TEST filesystem_ext4 00:08:35.476 ************************************ 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:35.476 15:13:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:35.476 ************************************ 00:08:35.476 START TEST filesystem_btrfs 00:08:35.476 ************************************ 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:35.476 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:35.477 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:35.477 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:35.477 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:35.477 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:35.734 btrfs-progs v6.6.2 00:08:35.734 See https://btrfs.readthedocs.io for more information. 00:08:35.734 00:08:35.734 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:35.734 NOTE: several default settings have changed in version 5.15, please make sure 00:08:35.734 this does not affect your deployments: 00:08:35.734 - DUP for metadata (-m dup) 00:08:35.734 - enabled no-holes (-O no-holes) 00:08:35.734 - enabled free-space-tree (-R free-space-tree) 00:08:35.734 00:08:35.734 Label: (null) 00:08:35.734 UUID: ebe8cecc-e814-47e7-b56b-c82d0c33a021 00:08:35.734 Node size: 16384 00:08:35.734 Sector size: 4096 00:08:35.734 Filesystem size: 510.00MiB 00:08:35.734 Block group profiles: 00:08:35.734 Data: single 8.00MiB 00:08:35.734 Metadata: DUP 32.00MiB 00:08:35.734 System: DUP 8.00MiB 00:08:35.734 SSD detected: yes 00:08:35.734 Zoned device: no 00:08:35.734 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:35.734 Runtime features: free-space-tree 00:08:35.734 Checksum: crc32c 00:08:35.734 Number of devices: 1 00:08:35.734 Devices: 00:08:35.734 ID SIZE PATH 00:08:35.734 1 510.00MiB /dev/nvme0n1p1 00:08:35.734 00:08:35.734 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:35.734 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.300 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.300 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:36.300 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.300 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:36.300 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:36.300 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2898674 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.558 00:08:36.558 real 0m0.956s 00:08:36.558 user 0m0.027s 00:08:36.558 sys 0m0.143s 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:36.558 ************************************ 00:08:36.558 END TEST filesystem_btrfs 00:08:36.558 ************************************ 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.558 ************************************ 00:08:36.558 START TEST filesystem_xfs 00:08:36.558 ************************************ 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:36.558 15:13:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:36.558 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:36.558 = sectsz=512 attr=2, projid32bit=1 00:08:36.558 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:36.558 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:36.558 data = bsize=4096 blocks=130560, imaxpct=25 00:08:36.558 = sunit=0 swidth=0 blks 00:08:36.558 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:36.558 log =internal log bsize=4096 blocks=16384, version=2 00:08:36.558 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:36.558 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:37.930 Discarding blocks...Done. 
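[annotation] Each filesystem test in this log (ext4 above, then btrfs and xfs here) runs the same smoke-test loop from target/filesystem.sh: make the filesystem on the exported namespace's GPT partition, mount it, create and delete a file with syncs in between, unmount, and confirm with kill -0 that the nvmf target survived the I/O. A condensed sketch, with the helper name paraphrased rather than quoted from the script:

    # Sketch of the per-filesystem loop seen in this log; $nvmfpid is the
    # target's PID and /dev/nvme0n1p1 the partition created by parted above.
    fs_smoke_test() {
        local fstype=$1 dev=/dev/nvme0n1p1
        case "$fstype" in
            ext4)      mkfs.ext4 -F "$dev" ;;       # ext4 forces with -F
            btrfs|xfs) "mkfs.$fstype" -f "$dev" ;;  # btrfs and xfs force with -f
        esac
        mount "$dev" /mnt/device
        touch /mnt/device/aaa
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        kill -0 "$nvmfpid"    # fails if the target crashed under the I/O
    }
    fs_smoke_test xfs

The sync/rm/sync pairing is the point of the test: it forces metadata and journal writes through the NVMe/TCP path rather than letting everything sit in the page cache.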
00:08:37.930 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:37.930 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2898674 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.863 00:08:39.863 real 0m3.100s 00:08:39.863 user 0m0.035s 00:08:39.863 sys 0m0.077s 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:39.863 ************************************ 00:08:39.863 END TEST filesystem_xfs 00:08:39.863 ************************************ 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.863 15:13:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2898674 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2898674 ']' 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2898674 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.863 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2898674 00:08:40.120 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.120 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.120 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2898674' 00:08:40.120 killing process with pid 2898674 00:08:40.120 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2898674 00:08:40.120 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2898674 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:40.378 00:08:40.378 real 0m12.150s 00:08:40.378 user 0m47.433s 00:08:40.378 sys 0m1.739s 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.378 ************************************ 00:08:40.378 END TEST nvmf_filesystem_no_in_capsule 00:08:40.378 ************************************ 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:40.378 ************************************ 00:08:40.378 START TEST nvmf_filesystem_in_capsule 00:08:40.378 ************************************ 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2900885 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2900885 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2900885 ']' 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.378 15:13:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:40.635 [2024-07-15 15:13:44.301140] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:08:40.635 [2024-07-15 15:13:44.301190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.635 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.635 [2024-07-15 15:13:44.378469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.635 [2024-07-15 15:13:44.447382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.635 [2024-07-15 15:13:44.447428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:40.635 [2024-07-15 15:13:44.447439] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.635 [2024-07-15 15:13:44.447463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.635 [2024-07-15 15:13:44.447471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.635 [2024-07-15 15:13:44.447524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.636 [2024-07-15 15:13:44.447617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.636 [2024-07-15 15:13:44.447682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.636 [2024-07-15 15:13:44.447684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.201 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.201 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.459 [2024-07-15 15:13:45.155625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.459 Malloc1 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.459 15:13:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.459 [2024-07-15 15:13:45.307713] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.459 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.460 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:41.460 { 00:08:41.460 "name": "Malloc1", 00:08:41.460 "aliases": [ 00:08:41.460 "e49bad50-f577-44b5-a1d5-d7092faa7ba9" 00:08:41.460 ], 00:08:41.460 "product_name": "Malloc disk", 00:08:41.460 "block_size": 512, 00:08:41.460 "num_blocks": 1048576, 00:08:41.460 "uuid": "e49bad50-f577-44b5-a1d5-d7092faa7ba9", 00:08:41.460 "assigned_rate_limits": { 00:08:41.460 "rw_ios_per_sec": 0, 00:08:41.460 "rw_mbytes_per_sec": 0, 00:08:41.460 "r_mbytes_per_sec": 0, 00:08:41.460 "w_mbytes_per_sec": 0 00:08:41.460 }, 00:08:41.460 "claimed": true, 00:08:41.460 "claim_type": "exclusive_write", 00:08:41.460 "zoned": false, 00:08:41.460 "supported_io_types": { 00:08:41.460 "read": true, 00:08:41.460 "write": true, 00:08:41.460 "unmap": true, 00:08:41.460 "flush": true, 00:08:41.460 "reset": true, 00:08:41.460 "nvme_admin": false, 00:08:41.460 "nvme_io": false, 00:08:41.460 "nvme_io_md": false, 00:08:41.460 "write_zeroes": true, 00:08:41.460 "zcopy": true, 00:08:41.460 "get_zone_info": false, 00:08:41.460 "zone_management": false, 00:08:41.460 
"zone_append": false, 00:08:41.460 "compare": false, 00:08:41.460 "compare_and_write": false, 00:08:41.460 "abort": true, 00:08:41.460 "seek_hole": false, 00:08:41.460 "seek_data": false, 00:08:41.460 "copy": true, 00:08:41.460 "nvme_iov_md": false 00:08:41.460 }, 00:08:41.460 "memory_domains": [ 00:08:41.460 { 00:08:41.460 "dma_device_id": "system", 00:08:41.460 "dma_device_type": 1 00:08:41.460 }, 00:08:41.460 { 00:08:41.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.460 "dma_device_type": 2 00:08:41.460 } 00:08:41.460 ], 00:08:41.460 "driver_specific": {} 00:08:41.460 } 00:08:41.460 ]' 00:08:41.460 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:41.717 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:41.717 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:41.717 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:41.717 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:41.717 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:41.717 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:41.717 15:13:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.090 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:43.090 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:43.090 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.090 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:43.090 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:44.988 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:45.553 15:13:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:45.553 15:13:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.928 ************************************ 00:08:46.928 START TEST filesystem_in_capsule_ext4 00:08:46.928 ************************************ 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:46.928 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:46.929 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:46.929 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:46.929 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:46.929 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:46.929 15:13:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:46.929 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:46.929 15:13:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:46.929 mke2fs 1.46.5 (30-Dec-2021) 00:08:46.929 Discarding device blocks: 0/522240 done 00:08:46.929 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:46.929 Filesystem UUID: 4d4bd7d3-5269-4dee-82c3-2ff76d99b92f 00:08:46.929 Superblock backups stored on blocks: 00:08:46.929 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:46.929 00:08:46.929 Allocating group tables: 0/64 done 00:08:46.929 Writing inode tables: 0/64 done 00:08:49.458 Creating journal (8192 blocks): done 00:08:49.458 Writing superblocks and filesystem accounting information: 0/64 done 00:08:49.458 00:08:49.458 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:49.458 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2900885 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:50.392 00:08:50.392 real 0m3.755s 00:08:50.392 user 0m0.029s 00:08:50.392 sys 0m0.076s 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:50.392 ************************************ 00:08:50.392 END TEST filesystem_in_capsule_ext4 00:08:50.392 ************************************ 00:08:50.392 
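Stripped of the xtrace noise, the ext4 pass above is a connect/partition/format/mount/touch/remove/umount round trip. A minimal standalone sketch of the same sequence follows — the NQN and 10.0.0.2:4420 mirror the nvme connect in this run, while the device name and sleeps are assumptions that can differ on another host (the run also passed --hostnqn/--hostid, omitted here for brevity):

  #!/usr/bin/env bash
  set -euo pipefail
  # Attach the SPDK namespace over NVMe/TCP (same NQN/address as the run above).
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  sleep 2                                   # assumption: give udev time to create the node
  dev=/dev/nvme0n1                          # assumption: first free controller on this host
  # One GPT partition spanning the device, as filesystem.sh@68 does.
  parted -s "$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe; sleep 1
  mkfs.ext4 -F "${dev}p1"                   # -F matches make_filesystem's ext4 branch
  mkdir -p /mnt/device
  mount "${dev}p1" /mnt/device
  touch /mnt/device/aaa; sync; rm /mnt/device/aaa; sync
  umount /mnt/device
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1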
15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:50.392 ************************************ 00:08:50.392 START TEST filesystem_in_capsule_btrfs 00:08:50.392 ************************************ 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:50.392 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:50.651 btrfs-progs v6.6.2 00:08:50.651 See https://btrfs.readthedocs.io for more information. 00:08:50.651 00:08:50.651 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:50.651 NOTE: several default settings have changed in version 5.15, please make sure 00:08:50.651 this does not affect your deployments: 00:08:50.651 - DUP for metadata (-m dup) 00:08:50.651 - enabled no-holes (-O no-holes) 00:08:50.651 - enabled free-space-tree (-R free-space-tree) 00:08:50.651 00:08:50.651 Label: (null) 00:08:50.651 UUID: fbe0069a-9a12-476d-b456-0d312c58bfff 00:08:50.651 Node size: 16384 00:08:50.651 Sector size: 4096 00:08:50.651 Filesystem size: 510.00MiB 00:08:50.651 Block group profiles: 00:08:50.651 Data: single 8.00MiB 00:08:50.651 Metadata: DUP 32.00MiB 00:08:50.651 System: DUP 8.00MiB 00:08:50.651 SSD detected: yes 00:08:50.651 Zoned device: no 00:08:50.651 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:50.651 Runtime features: free-space-tree 00:08:50.651 Checksum: crc32c 00:08:50.651 Number of devices: 1 00:08:50.651 Devices: 00:08:50.651 ID SIZE PATH 00:08:50.651 1 510.00MiB /dev/nvme0n1p1 00:08:50.651 00:08:50.651 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:50.651 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2900885 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:51.585 00:08:51.585 real 0m1.162s 00:08:51.585 user 0m0.032s 00:08:51.585 sys 0m0.143s 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.585 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:51.586 ************************************ 00:08:51.586 END TEST filesystem_in_capsule_btrfs 00:08:51.586 ************************************ 00:08:51.586 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:51.586 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:51.586 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:51.586 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.586 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:51.844 ************************************ 00:08:51.844 START TEST filesystem_in_capsule_xfs 00:08:51.844 ************************************ 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:51.844 15:13:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:51.844 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:51.844 = sectsz=512 attr=2, projid32bit=1 00:08:51.844 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:51.844 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:51.844 data = bsize=4096 blocks=130560, imaxpct=25 00:08:51.844 = sunit=0 swidth=0 blks 00:08:51.844 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:51.844 log =internal log bsize=4096 blocks=16384, version=2 00:08:51.844 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:51.844 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:52.778 Discarding blocks...Done. 
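The @924-@935 traces above show how make_filesystem picks its force flag: mkfs.ext4 spells it -F, while mkfs.btrfs and mkfs.xfs take -f. A hedged reconstruction of that branch — the real helper in common/autotest_common.sh also wraps the mkfs in a retry loop (per the local i=0 and @943 return traces), which this sketch leaves out:

  # Sketch of the force-flag selection seen in the xtrace output above.
  make_filesystem() {
      local fstype=$1 dev_name=$2
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F          # mke2fs: capital F forces creation
      else
          force=-f          # mkfs.btrfs / mkfs.xfs: lowercase f
      fi
      mkfs."$fstype" "$force" "$dev_name"
  }

  # e.g. the xfs step above amounts to:
  #   make_filesystem xfs /dev/nvme0n1p1   ->   mkfs.xfs -f /dev/nvme0n1p1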
00:08:52.778 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:52.778 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:55.310 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:55.310 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:55.310 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:55.310 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:55.310 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:55.310 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2900885 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:55.310 00:08:55.310 real 0m3.497s 00:08:55.310 user 0m0.033s 00:08:55.310 sys 0m0.077s 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:55.310 ************************************ 00:08:55.310 END TEST filesystem_in_capsule_xfs 00:08:55.310 ************************************ 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:55.310 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:55.633 15:13:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2900885 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2900885 ']' 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2900885 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:55.633 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2900885 00:08:55.891 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:55.891 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:55.891 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2900885' 00:08:55.891 killing process with pid 2900885 00:08:55.891 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2900885 00:08:55.891 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2900885 00:08:56.149 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:56.149 00:08:56.149 real 0m15.661s 00:08:56.149 user 1m1.159s 00:08:56.149 sys 0m1.961s 00:08:56.149 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.149 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.150 ************************************ 00:08:56.150 END TEST nvmf_filesystem_in_capsule 00:08:56.150 ************************************ 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.150 15:13:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.150 rmmod nvme_tcp 00:08:56.150 rmmod nvme_fabrics 00:08:56.150 rmmod nvme_keyring 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.150 15:14:00 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.680 15:14:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.680 00:08:58.680 real 0m37.309s 00:08:58.680 user 1m50.700s 00:08:58.680 sys 0m9.150s 00:08:58.680 15:14:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.680 15:14:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:58.680 ************************************ 00:08:58.680 END TEST nvmf_filesystem 00:08:58.680 ************************************ 00:08:58.680 15:14:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:58.680 15:14:02 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:58.680 15:14:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:58.680 15:14:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.680 15:14:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.680 ************************************ 00:08:58.680 START TEST nvmf_target_discovery 00:08:58.680 ************************************ 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:58.680 * Looking for test storage... 
00:08:58.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.680 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.681 15:14:02 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.237 15:14:08 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:05.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:05.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:05.237 Found net devices under 0000:af:00.0: cvl_0_0 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:05.237 Found net devices under 0000:af:00.1: cvl_0_1 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:05.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:09:05.237 00:09:05.237 --- 10.0.0.2 ping statistics --- 00:09:05.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.237 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:05.237 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:09:05.237 00:09:05.237 --- 10.0.0.1 ping statistics --- 00:09:05.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.237 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2907424 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2907424 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2907424 ']' 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
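The nvmf_tcp_init sequence above moves the first E810 port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace, leaves its peer (cvl_0_1, 10.0.0.1) in the root namespace, and proves the path with the two pings before the target app is launched inside the namespace (the ip netns exec ... nvmf_tgt line below). The same two-namespace topology can be approximated without physical NICs using a veth pair; a sketch, with the veth and namespace names invented for illustration:

  # Veth-based stand-in for the namespace plumbing above (no E810 required).
  ip netns add nvmf_tgt_ns                              # hypothetical namespace name
  ip link add veth_host type veth peer name veth_tgt    # hypothetical interface names
  ip link set veth_tgt netns nvmf_tgt_ns
  ip addr add 10.0.0.1/24 dev veth_host                 # initiator side, root namespace
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_host up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1          # target ns -> root ns
  # nvmf_tgt would then run as: ip netns exec nvmf_tgt_ns .../build/bin/nvmf_tgt ...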
00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.238 15:14:08 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.238 [2024-07-15 15:14:08.703260] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:09:05.238 [2024-07-15 15:14:08.703309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.238 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.238 [2024-07-15 15:14:08.777672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.238 [2024-07-15 15:14:08.850826] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.238 [2024-07-15 15:14:08.850867] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.238 [2024-07-15 15:14:08.850876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.238 [2024-07-15 15:14:08.850884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.238 [2024-07-15 15:14:08.850891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.238 [2024-07-15 15:14:08.850936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.238 [2024-07-15 15:14:08.851031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.238 [2024-07-15 15:14:08.851092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.238 [2024-07-15 15:14:08.851093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.804 [2024-07-15 15:14:09.561737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.804 Null1 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.804 [2024-07-15 15:14:09.614043] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:05.804 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 Null2 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 Null3 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 Null4 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.805 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.063 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:09:06.063 00:09:06.063 Discovery Log Number of Records 6, Generation counter 6 00:09:06.063 =====Discovery Log Entry 0====== 00:09:06.063 trtype: tcp 00:09:06.063 adrfam: ipv4 00:09:06.063 subtype: current discovery subsystem 00:09:06.063 treq: not required 00:09:06.063 portid: 0 00:09:06.063 trsvcid: 4420 00:09:06.063 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:06.063 traddr: 10.0.0.2 00:09:06.063 eflags: explicit discovery connections, duplicate discovery information 00:09:06.063 sectype: none 00:09:06.063 =====Discovery Log Entry 1====== 00:09:06.063 trtype: tcp 00:09:06.063 adrfam: ipv4 00:09:06.063 subtype: nvme subsystem 00:09:06.063 treq: not required 00:09:06.063 portid: 0 00:09:06.063 trsvcid: 4420 00:09:06.063 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:06.063 traddr: 10.0.0.2 00:09:06.064 eflags: none 00:09:06.064 sectype: none 00:09:06.064 =====Discovery Log Entry 2====== 00:09:06.064 trtype: tcp 00:09:06.064 adrfam: ipv4 00:09:06.064 subtype: nvme subsystem 00:09:06.064 treq: not required 00:09:06.064 portid: 0 00:09:06.064 trsvcid: 4420 00:09:06.064 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:06.064 traddr: 10.0.0.2 00:09:06.064 eflags: none 00:09:06.064 sectype: none 00:09:06.064 =====Discovery Log Entry 3====== 00:09:06.064 trtype: tcp 00:09:06.064 adrfam: ipv4 00:09:06.064 subtype: nvme subsystem 00:09:06.064 treq: not required 00:09:06.064 portid: 0 00:09:06.064 trsvcid: 4420 00:09:06.064 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:06.064 traddr: 10.0.0.2 00:09:06.064 eflags: none 00:09:06.064 sectype: none 00:09:06.064 =====Discovery Log Entry 4====== 00:09:06.064 trtype: tcp 00:09:06.064 adrfam: ipv4 
00:09:06.064 subtype: nvme subsystem 00:09:06.064 treq: not required 00:09:06.064 portid: 0 00:09:06.064 trsvcid: 4420 00:09:06.064 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:06.064 traddr: 10.0.0.2 00:09:06.064 eflags: none 00:09:06.064 sectype: none 00:09:06.064 =====Discovery Log Entry 5====== 00:09:06.064 trtype: tcp 00:09:06.064 adrfam: ipv4 00:09:06.064 subtype: discovery subsystem referral 00:09:06.064 treq: not required 00:09:06.064 portid: 0 00:09:06.064 trsvcid: 4430 00:09:06.064 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:06.064 traddr: 10.0.0.2 00:09:06.064 eflags: none 00:09:06.064 sectype: none 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:06.064 Perform nvmf subsystem discovery via RPC 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 [ 00:09:06.064 { 00:09:06.064 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:06.064 "subtype": "Discovery", 00:09:06.064 "listen_addresses": [ 00:09:06.064 { 00:09:06.064 "trtype": "TCP", 00:09:06.064 "adrfam": "IPv4", 00:09:06.064 "traddr": "10.0.0.2", 00:09:06.064 "trsvcid": "4420" 00:09:06.064 } 00:09:06.064 ], 00:09:06.064 "allow_any_host": true, 00:09:06.064 "hosts": [] 00:09:06.064 }, 00:09:06.064 { 00:09:06.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.064 "subtype": "NVMe", 00:09:06.064 "listen_addresses": [ 00:09:06.064 { 00:09:06.064 "trtype": "TCP", 00:09:06.064 "adrfam": "IPv4", 00:09:06.064 "traddr": "10.0.0.2", 00:09:06.064 "trsvcid": "4420" 00:09:06.064 } 00:09:06.064 ], 00:09:06.064 "allow_any_host": true, 00:09:06.064 "hosts": [], 00:09:06.064 "serial_number": "SPDK00000000000001", 00:09:06.064 "model_number": "SPDK bdev Controller", 00:09:06.064 "max_namespaces": 32, 00:09:06.064 "min_cntlid": 1, 00:09:06.064 "max_cntlid": 65519, 00:09:06.064 "namespaces": [ 00:09:06.064 { 00:09:06.064 "nsid": 1, 00:09:06.064 "bdev_name": "Null1", 00:09:06.064 "name": "Null1", 00:09:06.064 "nguid": "7441BE117C6445F8984FF874DC647F1B", 00:09:06.064 "uuid": "7441be11-7c64-45f8-984f-f874dc647f1b" 00:09:06.064 } 00:09:06.064 ] 00:09:06.064 }, 00:09:06.064 { 00:09:06.064 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:06.064 "subtype": "NVMe", 00:09:06.064 "listen_addresses": [ 00:09:06.064 { 00:09:06.064 "trtype": "TCP", 00:09:06.064 "adrfam": "IPv4", 00:09:06.064 "traddr": "10.0.0.2", 00:09:06.064 "trsvcid": "4420" 00:09:06.064 } 00:09:06.064 ], 00:09:06.064 "allow_any_host": true, 00:09:06.064 "hosts": [], 00:09:06.064 "serial_number": "SPDK00000000000002", 00:09:06.064 "model_number": "SPDK bdev Controller", 00:09:06.064 "max_namespaces": 32, 00:09:06.064 "min_cntlid": 1, 00:09:06.064 "max_cntlid": 65519, 00:09:06.064 "namespaces": [ 00:09:06.064 { 00:09:06.064 "nsid": 1, 00:09:06.064 "bdev_name": "Null2", 00:09:06.064 "name": "Null2", 00:09:06.064 "nguid": "DE73807F791D44A7A5C0556C55FC1E92", 00:09:06.064 "uuid": "de73807f-791d-44a7-a5c0-556c55fc1e92" 00:09:06.064 } 00:09:06.064 ] 00:09:06.064 }, 00:09:06.064 { 00:09:06.064 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:06.064 "subtype": "NVMe", 00:09:06.064 "listen_addresses": [ 00:09:06.064 { 00:09:06.064 "trtype": "TCP", 00:09:06.064 "adrfam": "IPv4", 00:09:06.064 "traddr": "10.0.0.2", 00:09:06.064 "trsvcid": "4420" 
00:09:06.064 } 00:09:06.064 ], 00:09:06.064 "allow_any_host": true, 00:09:06.064 "hosts": [], 00:09:06.064 "serial_number": "SPDK00000000000003", 00:09:06.064 "model_number": "SPDK bdev Controller", 00:09:06.064 "max_namespaces": 32, 00:09:06.064 "min_cntlid": 1, 00:09:06.064 "max_cntlid": 65519, 00:09:06.064 "namespaces": [ 00:09:06.064 { 00:09:06.064 "nsid": 1, 00:09:06.064 "bdev_name": "Null3", 00:09:06.064 "name": "Null3", 00:09:06.064 "nguid": "DF2BD107B93B45DB8AE846655279C5E6", 00:09:06.064 "uuid": "df2bd107-b93b-45db-8ae8-46655279c5e6" 00:09:06.064 } 00:09:06.064 ] 00:09:06.064 }, 00:09:06.064 { 00:09:06.064 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:06.064 "subtype": "NVMe", 00:09:06.064 "listen_addresses": [ 00:09:06.064 { 00:09:06.064 "trtype": "TCP", 00:09:06.064 "adrfam": "IPv4", 00:09:06.064 "traddr": "10.0.0.2", 00:09:06.064 "trsvcid": "4420" 00:09:06.064 } 00:09:06.064 ], 00:09:06.064 "allow_any_host": true, 00:09:06.064 "hosts": [], 00:09:06.064 "serial_number": "SPDK00000000000004", 00:09:06.064 "model_number": "SPDK bdev Controller", 00:09:06.064 "max_namespaces": 32, 00:09:06.064 "min_cntlid": 1, 00:09:06.064 "max_cntlid": 65519, 00:09:06.064 "namespaces": [ 00:09:06.064 { 00:09:06.064 "nsid": 1, 00:09:06.064 "bdev_name": "Null4", 00:09:06.064 "name": "Null4", 00:09:06.064 "nguid": "974888D881E24824BD5D88A5978A0930", 00:09:06.064 "uuid": "974888d8-81e2-4824-bd5d-88a5978a0930" 00:09:06.064 } 00:09:06.064 ] 00:09:06.064 } 00:09:06.064 ] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.064 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.065 15:14:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.323 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:06.323 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:06.323 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:06.323 15:14:09 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:06.323 15:14:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.324 15:14:09 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@117 -- # sync 00:09:06.324 15:14:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.324 15:14:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:06.324 15:14:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.324 15:14:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.324 rmmod nvme_tcp 00:09:06.324 rmmod nvme_fabrics 00:09:06.324 rmmod nvme_keyring 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2907424 ']' 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2907424 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2907424 ']' 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2907424 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2907424 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2907424' 00:09:06.324 killing process with pid 2907424 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2907424 00:09:06.324 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 2907424 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.583 15:14:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.486 15:14:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.486 00:09:08.486 real 0m10.180s 00:09:08.486 user 0m7.539s 00:09:08.486 sys 0m5.193s 00:09:08.486 15:14:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.486 15:14:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.486 ************************************ 00:09:08.486 END TEST nvmf_target_discovery 00:09:08.486 
************************************ 00:09:08.744 15:14:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:08.744 15:14:12 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:08.744 15:14:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.744 15:14:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.744 15:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:08.744 ************************************ 00:09:08.744 START TEST nvmf_referrals 00:09:08.744 ************************************ 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:08.744 * Looking for test storage... 00:09:08.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:08.744 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
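The referrals test that follows exercises three discovery-referral RPCs against the live target: nvmf_discovery_add_referral, nvmf_discovery_get_referrals, and nvmf_discovery_remove_referral. The rpc_cmd seen in the trace is the harness wrapper around SPDK's scripts/rpc.py; a minimal manual sketch of the same add/verify/remove cycle, assuming a target already running with its RPC socket at the default /var/tmp/spdk.sock, and using the referral addresses just defined (with referral port 4430, set immediately below as NVMF_PORT_REFERRAL), would look roughly like:

    # Register the three referral addresses with the discovery service.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Target-side view: the referral list should now have length 3.
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length

    # Host-side view: discovery against the 8009 listener should report the
    # same addresses as records beyond the current discovery subsystem.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | \
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # Remove the referrals again; both views should drop back to empty.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

The trace below runs this cycle and then repeats it with the -n flag to attach an explicit subsystem NQN (discovery, or nqn.2016-06.io.spdk:cnode1) to a referral, each time cross-checking the RPC view against the nvme discover view.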
00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.745 15:14:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.307 15:14:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:15.307 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:15.307 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.307 15:14:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:15.307 Found net devices under 0000:af:00.0: cvl_0_0 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:15.307 Found net devices under 0000:af:00.1: cvl_0_1 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.307 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:15.307 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:15.307 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.307 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.307 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.307 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.307 15:14:19 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.307 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:09:15.566 00:09:15.566 --- 10.0.0.2 ping statistics --- 00:09:15.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.566 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:09:15.566 00:09:15.566 --- 10.0.0.1 ping statistics --- 00:09:15.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.566 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2911396 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2911396 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2911396 ']' 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
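At this point the harness has turned the two E810 ports found above into a point-to-point NVMe/TCP link: cvl_0_0 was moved into the private network namespace cvl_0_0_ns_spdk and addressed as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the sub-millisecond pings confirm the path in both directions before the target comes up. Condensed from the trace, and assuming the same interface and namespace names plus a working directory at the SPDK checkout, the equivalent manual sequence is roughly:

    # Isolate the target-side port in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP traffic and sanity-check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace; nvmfappstart then waits
    # (waitforlisten) until the RPC socket /var/tmp/spdk.sock answers.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &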
00:09:15.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.566 15:14:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:15.566 [2024-07-15 15:14:19.417066] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:09:15.566 [2024-07-15 15:14:19.417118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.566 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.825 [2024-07-15 15:14:19.492343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.825 [2024-07-15 15:14:19.566430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.825 [2024-07-15 15:14:19.566469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.825 [2024-07-15 15:14:19.566482] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.825 [2024-07-15 15:14:19.566491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.825 [2024-07-15 15:14:19.566497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.825 [2024-07-15 15:14:19.566538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.825 [2024-07-15 15:14:19.566633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.825 [2024-07-15 15:14:19.566723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.825 [2024-07-15 15:14:19.566725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.391 [2024-07-15 15:14:20.283747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.391 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.649 [2024-07-15 15:14:20.299952] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:16.649 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:16.907 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:17.165 15:14:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.165 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:17.422 15:14:21 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:17.422 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:17.422 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:17.422 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:17.422 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.422 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.423 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:17.681 15:14:21 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.681 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:17.939 
15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.939 rmmod nvme_tcp 00:09:17.939 rmmod nvme_fabrics 00:09:17.939 rmmod nvme_keyring 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2911396 ']' 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2911396 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2911396 ']' 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2911396 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.939 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2911396 00:09:18.197 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.197 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.197 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2911396' 00:09:18.197 killing process with pid 2911396 00:09:18.197 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2911396 00:09:18.197 15:14:21 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2911396 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.197 15:14:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.728 15:14:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.728 00:09:20.728 real 0m11.645s 00:09:20.728 user 0m12.511s 00:09:20.728 sys 0m5.928s 00:09:20.728 15:14:24 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.728 15:14:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:20.728 ************************************ 00:09:20.728 END TEST nvmf_referrals 00:09:20.728 ************************************ 00:09:20.728 15:14:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.728 15:14:24 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:20.728 15:14:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.729 15:14:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.729 15:14:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.729 ************************************ 00:09:20.729 START TEST nvmf_connect_disconnect 00:09:20.729 ************************************ 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:20.729 * Looking for test storage... 00:09:20.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.729 15:14:24 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.729 15:14:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:27.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:27.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:27.316 15:14:30 
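
gather_supported_nvmf_pci_devs above builds vendor:device allow-lists (e810, x722, mlx) and walks the PCI bus cache against them; this box matches twice on Intel 8086:0x159b. The same enumeration can be reproduced by hand, e.g.:

    # list PCI functions with the vendor:device pair matched above; on this
    # machine it should print the two ports at 0000:af:00.0 and 0000:af:00.1
    lspci -d 8086:159b
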
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:27.316 Found net devices under 0000:af:00.0: cvl_0_0 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:27.316 Found net devices under 0000:af:00.1: cvl_0_1 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- 
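
The PCI-to-netdev resolution above is plain sysfs globbing: each matched function's interface name is read from /sys/bus/pci/devices/$pci/net/. By hand:

    ls /sys/bus/pci/devices/0000:af:00.0/net   # -> cvl_0_0 (becomes the target port)
    ls /sys/bus/pci/devices/0000:af:00.1/net   # -> cvl_0_1 (becomes the initiator port)
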
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.316 15:14:30 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.316 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:27.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:09:27.316 00:09:27.316 --- 10.0.0.2 ping statistics --- 00:09:27.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.316 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:27.316 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
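
nvmf_tcp_init then splits the two ports of the NIC across a network namespace so target and initiator traffic crosses the physical link instead of loopback. The commands traced above, in order:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # reachability both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
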
00:09:27.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:09:27.316 00:09:27.316 --- 10.0.0.1 ping statistics --- 00:09:27.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.316 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:09:27.316 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.316 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:27.316 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2915460 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2915460 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2915460 ']' 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.317 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:27.317 [2024-07-15 15:14:31.124301] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:09:27.317 [2024-07-15 15:14:31.124359] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.317 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.317 [2024-07-15 15:14:31.199392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.575 [2024-07-15 15:14:31.274761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.575 [2024-07-15 15:14:31.274800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.575 [2024-07-15 15:14:31.274809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.575 [2024-07-15 15:14:31.274818] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.575 [2024-07-15 15:14:31.274825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.575 [2024-07-15 15:14:31.274881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.575 [2024-07-15 15:14:31.274978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.575 [2024-07-15 15:14:31.275060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.575 [2024-07-15 15:14:31.275062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 [2024-07-15 15:14:31.977649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.140 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.140 15:14:32 
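
With networking up, the target is started inside the namespace and configured over the RPC socket. Condensed from the trace above plus the two calls just below (rpc_cmd is the harness wrapper around scripts/rpc.py; workspace paths abbreviated):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # 2915460 here; -i is the shm id, -e the tracepoint group mask
                 # (0xFFFF, per the notice above), -m the 4-core reactor mask.
                 # waitforlisten polls /var/tmp/spdk.sock until it answers.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512           # -> Malloc0, 64 MiB of 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
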
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.140 [2024-07-15 15:14:32.032209] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:28.140 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:28.141 15:14:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:32.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.459 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.459 rmmod nvme_tcp 00:09:45.718 rmmod nvme_fabrics 00:09:45.718 rmmod nvme_keyring 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2915460 ']' 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2915460 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- 
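
The five "disconnected 1 controller(s)" lines above are the test proper: num_iterations=5 rounds of attach/detach against the listener. A hedged sketch of the loop body, reconstructed from the output rather than from connect_disconnect.sh itself (the real script adds device-readiness waits the trace elides):

    for i in $(seq 1 5); do
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # ... wait for the Malloc0 namespace to surface as a /dev/nvme*n1 device ...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the NQN:... line above
    done
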
common/autotest_common.sh@948 -- # '[' -z 2915460 ']' 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2915460 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2915460 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2915460' 00:09:45.718 killing process with pid 2915460 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2915460 00:09:45.718 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2915460 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.977 15:14:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.879 15:14:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.879 00:09:47.879 real 0m27.561s 00:09:47.879 user 1m14.376s 00:09:47.879 sys 0m7.119s 00:09:47.879 15:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.879 15:14:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:47.879 ************************************ 00:09:47.879 END TEST nvmf_connect_disconnect 00:09:47.880 ************************************ 00:09:48.138 15:14:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:48.138 15:14:51 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:48.138 15:14:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:48.138 15:14:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.138 15:14:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:48.138 ************************************ 00:09:48.138 START TEST nvmf_multitarget 00:09:48.138 ************************************ 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:48.138 * Looking for test storage... 
00:09:48.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:48.138 15:14:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.702 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:54.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:54.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:54.703 Found net devices under 0000:af:00.0: cvl_0_0 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:54.703 Found net devices under 0000:af:00.1: cvl_0_1 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.703 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:09:54.960 00:09:54.960 --- 10.0.0.2 ping statistics --- 00:09:54.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.960 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:09:54.960 00:09:54.960 --- 10.0.0.1 ping statistics --- 00:09:54.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.960 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2922417 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2922417 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2922417 ']' 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.960 15:14:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:55.218 [2024-07-15 15:14:58.875282] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:09:55.218 [2024-07-15 15:14:58.875335] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.218 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.218 [2024-07-15 15:14:58.950485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.218 [2024-07-15 15:14:59.025193] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.218 [2024-07-15 15:14:59.025229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.218 [2024-07-15 15:14:59.025237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.218 [2024-07-15 15:14:59.025246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.218 [2024-07-15 15:14:59.025268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.218 [2024-07-15 15:14:59.025314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.218 [2024-07-15 15:14:59.025408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.218 [2024-07-15 15:14:59.025493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.218 [2024-07-15 15:14:59.025494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:56.152 "nvmf_tgt_1" 00:09:56.152 15:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:56.152 "nvmf_tgt_2" 00:09:56.152 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:56.152 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:56.410 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:56.410 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:56.410 true 00:09:56.410 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:56.668 true 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.668 rmmod nvme_tcp 00:09:56.668 rmmod nvme_fabrics 00:09:56.668 rmmod nvme_keyring 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2922417 ']' 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2922417 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2922417 ']' 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2922417 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:56.668 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2922417 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2922417' 00:09:56.926 killing process with pid 2922417 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2922417 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2922417 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.926 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.495 15:15:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.495 00:09:59.495 real 0m11.009s 00:09:59.495 user 0m9.523s 00:09:59.495 sys 0m5.772s 00:09:59.495 15:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.495 15:15:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:59.495 ************************************ 00:09:59.495 END TEST nvmf_multitarget 00:09:59.495 ************************************ 00:09:59.495 15:15:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:59.495 15:15:02 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:59.495 15:15:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.495 15:15:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.495 15:15:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.495 ************************************ 00:09:59.495 START TEST nvmf_rpc 00:09:59.495 ************************************ 00:09:59.495 15:15:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:59.495 * Looking for test storage... 
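The multitarget pass that just ended drives the custom multitarget_rpc.py client through a count/create/count/delete/count cycle: assert the baseline of one (default) target, add two named targets, verify the count is three, then delete both and verify the count drops back to one. A condensed sketch of that sequence, assuming it is run from the spdk checkout root as in the log (the -s 32 argument mirrors the test invocation):

  jq_len() { test/nvmf/target/multitarget_rpc.py nvmf_get_targets | jq length; }
  [ "$(jq_len)" = 1 ]                                            # only the default target exists
  test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
  test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$(jq_len)" = 3 ]                                            # default plus the two new targets
  test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
  test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
  [ "$(jq_len)" = 1 ]                                            # back to the default target only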
00:09:59.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.495 15:15:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.496 15:15:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.047 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.047 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.047 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
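nvmftestinit next scans the PCI bus for supported NVMe-oF NICs; on this rig it matches the two Intel E810 functions (0x8086:0x159b, ice driver) and resolves them to the cvl_0_0/cvl_0_1 net devices, as the "Found 0000:af:00.x" lines below show. A rough manual equivalent, assuming the same bus addresses as this log:

  lspci -d 8086:159b                                             # the E810 functions seen in this scan
  ls /sys/bus/pci/devices/0000:af:00.0/net                       # PCI function -> net device (cvl_0_0)
  ls /sys/bus/pci/devices/0000:af:00.1/net                       # PCI function -> net device (cvl_0_1)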
00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:06.048 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:06.048 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:06.048 Found net devices under 0000:af:00.0: cvl_0_0 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:06.048 Found net devices under 0000:af:00.1: cvl_0_1 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:10:06.048 00:10:06.048 --- 10.0.0.2 ping statistics --- 00:10:06.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.048 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:10:06.048 00:10:06.048 --- 10.0.0.1 ping statistics --- 00:10:06.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.048 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2926941 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2926941 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2926941 ']' 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.048 15:15:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.306 [2024-07-15 15:15:09.964568] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:10:06.306 [2024-07-15 15:15:09.964615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.306 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.306 [2024-07-15 15:15:10.040564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.306 [2024-07-15 15:15:10.121779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.306 [2024-07-15 15:15:10.121819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:06.306 [2024-07-15 15:15:10.121829] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.306 [2024-07-15 15:15:10.121841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.306 [2024-07-15 15:15:10.121848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.306 [2024-07-15 15:15:10.121908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.306 [2024-07-15 15:15:10.122002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.306 [2024-07-15 15:15:10.122084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.306 [2024-07-15 15:15:10.122086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:07.240 "tick_rate": 2500000000, 00:10:07.240 "poll_groups": [ 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_000", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [] 00:10:07.240 }, 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_001", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [] 00:10:07.240 }, 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_002", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [] 00:10:07.240 }, 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_003", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [] 00:10:07.240 } 00:10:07.240 ] 00:10:07.240 }' 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.240 [2024-07-15 15:15:10.944068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.240 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:07.240 "tick_rate": 2500000000, 00:10:07.240 "poll_groups": [ 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_000", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [ 00:10:07.240 { 00:10:07.240 "trtype": "TCP" 00:10:07.240 } 00:10:07.240 ] 00:10:07.240 }, 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_001", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [ 00:10:07.240 { 00:10:07.240 "trtype": "TCP" 00:10:07.240 } 00:10:07.240 ] 00:10:07.240 }, 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_002", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [ 00:10:07.240 { 00:10:07.240 "trtype": "TCP" 00:10:07.240 } 00:10:07.240 ] 00:10:07.240 }, 00:10:07.240 { 00:10:07.240 "name": "nvmf_tgt_poll_group_003", 00:10:07.240 "admin_qpairs": 0, 00:10:07.240 "io_qpairs": 0, 00:10:07.240 "current_admin_qpairs": 0, 00:10:07.240 "current_io_qpairs": 0, 00:10:07.240 "pending_bdev_io": 0, 00:10:07.240 "completed_nvme_io": 0, 00:10:07.240 "transports": [ 00:10:07.240 { 00:10:07.240 "trtype": "TCP" 00:10:07.240 } 00:10:07.240 ] 00:10:07.240 } 00:10:07.240 ] 00:10:07.240 }' 00:10:07.241 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:07.241 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:07.241 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:07.241 15:15:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
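The jsum helper traced here reduces the per-poll-group counters from nvmf_get_stats to a single total with a jq-plus-awk pipeline; that total feeds the (( 0 == 0 )) assertions before and after the transport is created. The same one-liners, assuming the stock rpc.py client against the default /var/tmp/spdk.sock:

  # total admin qpairs across all four poll groups
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'
  # total io qpairs, same pattern with a different filter
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'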
00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.241 Malloc1 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.241 [2024-07-15 15:15:11.127801] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:07.241 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:10:07.499 [2024-07-15 15:15:11.161755] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:10:07.499 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:07.499 could not add new controller: failed to write to nvme-fabrics device 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.499 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.873 15:15:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.873 15:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:08.873 15:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.873 15:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:08.873 15:15:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:10.773 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:10.773 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:10.773 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.773 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:10.773 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.773 15:15:14 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:10.773 15:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.031 [2024-07-15 15:15:14.828281] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:10:11.031 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:11.031 could not add new controller: failed to write to nvme-fabrics device 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.031 15:15:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.404 15:15:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.404 15:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:12.404 15:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.404 15:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:12.404 15:15:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.936 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:14.937 15:15:18 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.937 [2024-07-15 15:15:18.386812] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.937 15:15:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.868 15:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.868 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.868 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.868 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:15.868 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:18.395 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.396 [2024-07-15 15:15:21.880360] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.396 15:15:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:19.769 15:15:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:19.769 15:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:10:19.769 15:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.769 15:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:19.769 15:15:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.688 [2024-07-15 15:15:25.414786] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.688 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.058 15:15:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.058 15:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.058 15:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.058 15:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.058 15:15:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:24.957 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:24.957 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:24.957 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.957 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:24.957 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.957 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:24.957 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:25.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 [2024-07-15 15:15:28.946011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.215 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.652 15:15:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.652 15:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:26.652 15:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.652 15:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:26.652 15:15:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.554 
15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.554 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.813 [2024-07-15 15:15:32.485682] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.813 15:15:32 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.813 15:15:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.189 15:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.189 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:30.189 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.189 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:30.189 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.091 15:15:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 [2024-07-15 15:15:36.008289] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 [2024-07-15 15:15:36.056389] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 [2024-07-15 15:15:36.108537] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
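The waitforserial / waitforserial_disconnect traces interleaved above come from common/autotest_common.sh: they poll lsblk until a block device carrying the subsystem's serial appears on (or disappears from) the initiator. A minimal sketch of that polling loop, assuming the serial string is unique on the host:

  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          # count initiator block devices whose SERIAL column matches
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }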
00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:32.350 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 [2024-07-15 15:15:36.156697] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
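Each pass of the target/rpc.sh@99 loop above exercises the whole subsystem lifecycle over JSON-RPC without any I/O in between; rpc_cmd resolves to scripts/rpc.py here, so stripped of the xtrace noise one iteration amounts to:

  for i in $(seq 1 $loops); do
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done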
00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 [2024-07-15 15:15:36.204859] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.351 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:32.610 "tick_rate": 2500000000, 00:10:32.610 "poll_groups": [ 00:10:32.610 { 00:10:32.610 "name": "nvmf_tgt_poll_group_000", 00:10:32.610 "admin_qpairs": 2, 00:10:32.610 "io_qpairs": 196, 00:10:32.610 "current_admin_qpairs": 0, 00:10:32.610 "current_io_qpairs": 0, 00:10:32.610 "pending_bdev_io": 0, 00:10:32.610 "completed_nvme_io": 246, 00:10:32.610 "transports": [ 00:10:32.610 { 00:10:32.610 "trtype": "TCP" 00:10:32.610 } 00:10:32.610 ] 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": "nvmf_tgt_poll_group_001", 00:10:32.610 "admin_qpairs": 2, 00:10:32.610 "io_qpairs": 196, 00:10:32.610 "current_admin_qpairs": 0, 00:10:32.610 "current_io_qpairs": 0, 00:10:32.610 "pending_bdev_io": 0, 00:10:32.610 "completed_nvme_io": 298, 00:10:32.610 "transports": [ 00:10:32.610 { 00:10:32.610 "trtype": "TCP" 00:10:32.610 } 00:10:32.610 ] 00:10:32.610 }, 00:10:32.610 { 
00:10:32.610 "name": "nvmf_tgt_poll_group_002", 00:10:32.610 "admin_qpairs": 1, 00:10:32.610 "io_qpairs": 196, 00:10:32.610 "current_admin_qpairs": 0, 00:10:32.610 "current_io_qpairs": 0, 00:10:32.610 "pending_bdev_io": 0, 00:10:32.610 "completed_nvme_io": 294, 00:10:32.610 "transports": [ 00:10:32.610 { 00:10:32.610 "trtype": "TCP" 00:10:32.610 } 00:10:32.610 ] 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": "nvmf_tgt_poll_group_003", 00:10:32.610 "admin_qpairs": 2, 00:10:32.610 "io_qpairs": 196, 00:10:32.610 "current_admin_qpairs": 0, 00:10:32.610 "current_io_qpairs": 0, 00:10:32.610 "pending_bdev_io": 0, 00:10:32.610 "completed_nvme_io": 296, 00:10:32.610 "transports": [ 00:10:32.610 { 00:10:32.610 "trtype": "TCP" 00:10:32.610 } 00:10:32.610 ] 00:10:32.610 } 00:10:32.610 ] 00:10:32.610 }' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.610 rmmod nvme_tcp 00:10:32.610 rmmod nvme_fabrics 00:10:32.610 rmmod nvme_keyring 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2926941 ']' 00:10:32.610 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2926941 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2926941 ']' 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2926941 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2926941 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2926941' 00:10:32.611 killing process with pid 2926941 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2926941 00:10:32.611 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2926941 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.870 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.405 15:15:38 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:35.405 00:10:35.405 real 0m35.826s 00:10:35.405 user 1m47.024s 00:10:35.405 sys 0m8.109s 00:10:35.405 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.405 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.405 ************************************ 00:10:35.405 END TEST nvmf_rpc 00:10:35.405 ************************************ 00:10:35.405 15:15:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:35.405 15:15:38 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:35.405 15:15:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:35.405 15:15:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.405 15:15:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:35.405 ************************************ 00:10:35.405 START TEST nvmf_invalid 00:10:35.405 ************************************ 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:35.405 * Looking for test storage... 
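The qpair totals asserted just before nvmf_rpc shut down come from the jsum helper at target/rpc.sh@19-20, which sums a jq projection over the captured nvmf_get_stats output; roughly:

  jsum() {
      local filter=$1
      # project one numeric field per poll group, then sum the column
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 784 in this run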
00:10:35.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:35.405 15:15:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.405 15:15:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.405 15:15:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.405 15:15:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.405 15:15:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:41.963 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:41.963 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:41.963 Found net devices under 0000:af:00.0: cvl_0_0 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.963 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:41.964 Found net devices under 0000:af:00.1: cvl_0_1 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:41.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:10:41.964 00:10:41.964 --- 10.0.0.2 ping statistics --- 00:10:41.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.964 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:10:41.964 00:10:41.964 --- 10.0.0.1 ping statistics --- 00:10:41.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.964 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2935270 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2935270 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2935270 ']' 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.964 15:15:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:41.964 [2024-07-15 15:15:45.737051] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
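Before the target app comes up, nvmf_tcp_init (nvmf/common.sh) wires the two ice ports into a point-to-point test network: one port is moved into a fresh namespace to play the target, both ends get 10.0.0.x/24 addresses, and a firewall rule admits the NVMe/TCP port. Condensed from the traces above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # target reachable from the initiator side

Both pings (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) came back in under 0.3 ms, so the link is good and the invalid-input tests can start the target with nvmfappstart -m 0xF.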
00:10:41.964 [2024-07-15 15:15:45.737105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.964 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.964 [2024-07-15 15:15:45.812732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:42.222 [2024-07-15 15:15:45.887997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.222 [2024-07-15 15:15:45.888032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.222 [2024-07-15 15:15:45.888041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.222 [2024-07-15 15:15:45.888049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.222 [2024-07-15 15:15:45.888072] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.222 [2024-07-15 15:15:45.888121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.222 [2024-07-15 15:15:45.888211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.222 [2024-07-15 15:15:45.888297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:42.222 [2024-07-15 15:15:45.888299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:42.787 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11273 00:10:43.044 [2024-07-15 15:15:46.759086] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:43.044 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:43.044 { 00:10:43.044 "nqn": "nqn.2016-06.io.spdk:cnode11273", 00:10:43.044 "tgt_name": "foobar", 00:10:43.044 "method": "nvmf_create_subsystem", 00:10:43.044 "req_id": 1 00:10:43.044 } 00:10:43.044 Got JSON-RPC error response 00:10:43.044 response: 00:10:43.044 { 00:10:43.044 "code": -32603, 00:10:43.044 "message": "Unable to find target foobar" 00:10:43.044 }' 00:10:43.044 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:43.044 { 00:10:43.044 "nqn": "nqn.2016-06.io.spdk:cnode11273", 00:10:43.044 "tgt_name": "foobar", 00:10:43.044 "method": "nvmf_create_subsystem", 00:10:43.045 "req_id": 1 00:10:43.045 } 00:10:43.045 Got JSON-RPC error response 00:10:43.045 response: 00:10:43.045 { 00:10:43.045 "code": -32603, 00:10:43.045 "message": "Unable to find target foobar" 
00:10:43.045 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:43.045 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:43.045 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9686 00:10:43.045 [2024-07-15 15:15:46.935739] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9686: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:43.302 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:43.302 { 00:10:43.302 "nqn": "nqn.2016-06.io.spdk:cnode9686", 00:10:43.302 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:43.302 "method": "nvmf_create_subsystem", 00:10:43.302 "req_id": 1 00:10:43.302 } 00:10:43.302 Got JSON-RPC error response 00:10:43.302 response: 00:10:43.302 { 00:10:43.302 "code": -32602, 00:10:43.302 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:43.302 }' 00:10:43.302 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:43.302 { 00:10:43.302 "nqn": "nqn.2016-06.io.spdk:cnode9686", 00:10:43.302 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:43.302 "method": "nvmf_create_subsystem", 00:10:43.302 "req_id": 1 00:10:43.302 } 00:10:43.302 Got JSON-RPC error response 00:10:43.302 response: 00:10:43.302 { 00:10:43.302 "code": -32602, 00:10:43.302 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:43.302 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:43.302 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:43.302 15:15:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9695 00:10:43.302 [2024-07-15 15:15:47.124312] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9695: invalid model number 'SPDK_Controller' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:43.302 { 00:10:43.302 "nqn": "nqn.2016-06.io.spdk:cnode9695", 00:10:43.302 "model_number": "SPDK_Controller\u001f", 00:10:43.302 "method": "nvmf_create_subsystem", 00:10:43.302 "req_id": 1 00:10:43.302 } 00:10:43.302 Got JSON-RPC error response 00:10:43.302 response: 00:10:43.302 { 00:10:43.302 "code": -32602, 00:10:43.302 "message": "Invalid MN SPDK_Controller\u001f" 00:10:43.302 }' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:43.302 { 00:10:43.302 "nqn": "nqn.2016-06.io.spdk:cnode9695", 00:10:43.302 "model_number": "SPDK_Controller\u001f", 00:10:43.302 "method": "nvmf_create_subsystem", 00:10:43.302 "req_id": 1 00:10:43.302 } 00:10:43.302 Got JSON-RPC error response 00:10:43.302 response: 00:10:43.302 { 00:10:43.302 "code": -32602, 00:10:43.302 "message": "Invalid MN SPDK_Controller\u001f" 00:10:43.302 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' 
'85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.302 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '1x\rqVi_It$vd~@v0QeV^' 00:10:43.560 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '1x\rqVi_It$vd~@v0QeV^' nqn.2016-06.io.spdk:cnode13434 00:10:43.818 [2024-07-15 15:15:47.473498] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13434: invalid serial number '1x\rqVi_It$vd~@v0QeV^' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:43.818 { 00:10:43.818 "nqn": "nqn.2016-06.io.spdk:cnode13434", 00:10:43.818 "serial_number": "1x\\rqVi_It$vd~@v0QeV^", 00:10:43.818 "method": "nvmf_create_subsystem", 00:10:43.818 "req_id": 1 00:10:43.818 } 00:10:43.818 Got JSON-RPC error response 00:10:43.818 response: 00:10:43.818 { 00:10:43.818 
"code": -32602, 00:10:43.818 "message": "Invalid SN 1x\\rqVi_It$vd~@v0QeV^" 00:10:43.818 }' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:43.818 { 00:10:43.818 "nqn": "nqn.2016-06.io.spdk:cnode13434", 00:10:43.818 "serial_number": "1x\\rqVi_It$vd~@v0QeV^", 00:10:43.818 "method": "nvmf_create_subsystem", 00:10:43.818 "req_id": 1 00:10:43.818 } 00:10:43.818 Got JSON-RPC error response 00:10:43.818 response: 00:10:43.818 { 00:10:43.818 "code": -32602, 00:10:43.818 "message": "Invalid SN 1x\\rqVi_It$vd~@v0QeV^" 00:10:43.818 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:43.818 
15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:43.818 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:43.819 
15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:43.819 15:15:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:43.819 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.077 15:15:47 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e'
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f'
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177'
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c'
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44'
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]]
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '"fiOZI;R^M3g%z._k5d3tvmiE_nn|F*s'\''OtL\NLD'
00:10:44.077 15:15:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '"fiOZI;R^M3g%z._k5d3tvmiE_nn|F*s'\''OtL\NLD' nqn.2016-06.io.spdk:cnode2894
00:10:44.077 [2024-07-15 15:15:47.983200] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2894: invalid model number '"fiOZI;R^M3g%z._k5d3tvmiE_nn|F*s'OtL\NLD'
00:10:44.335 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:10:44.335 {
00:10:44.335 "nqn": "nqn.2016-06.io.spdk:cnode2894",
00:10:44.335 "model_number": "\"fiOZI;R^M3g%z._k5d3tvmiE_nn|F*s'\''OtL\\N\u007fLD",
00:10:44.335 "method": "nvmf_create_subsystem",
00:10:44.335 "req_id": 1
00:10:44.335 }
00:10:44.335 Got JSON-RPC error response
00:10:44.335 response:
00:10:44.335 {
00:10:44.335 "code": -32602,
00:10:44.335 "message": "Invalid MN \"fiOZI;R^M3g%z._k5d3tvmiE_nn|F*s'\''OtL\\N\u007fLD"
00:10:44.335 }'
00:10:44.335 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:10:44.335 {
00:10:44.335 "nqn": "nqn.2016-06.io.spdk:cnode2894",
00:10:44.335 "model_number": "\"fiOZI;R^M3g%z._k5d3tvmiE_nn|F*s'OtL\\N\u007fLD",
00:10:44.335 "method": "nvmf_create_subsystem",
00:10:44.335 "req_id": 1
00:10:44.335 }
00:10:44.335 Got JSON-RPC error response
00:10:44.335 response:
00:10:44.335 {
00:10:44.335 "code": -32602,
00:10:44.335 "message": "Invalid MN \"fiOZI;R^M3g%z._k5d3tvmiE_nn|F*s'OtL\\N\u007fLD"
00:10:44.335 } == *\I\n\v\a\l\i\d\ \M\N* ]]
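
This is the core pattern of target/invalid.sh: gen_random_s builds a probe string one random code point at a time out of the chars array above (ASCII 32-127), the string is handed to rpc.py nvmf_create_subsystem as a serial (-s) or model (-d) number, and the echoed JSON-RPC error is matched against the expected message. A condensed, self-contained sketch; check_invalid is a hypothetical wrapper (not a helper from the suite), and the real gen_random_s special-cases quoting for characters such as backslash and quote, which the command substitution below glosses over:

    gen_random_s() {                         # condensed form of the loop traced above
        local length=$1 string= ll
        for (( ll = 0; ll < length; ll++ )); do
            # random code point in 32..127, rendered via printf %x / echo -e as above
            # (a bare space is trimmed by the command substitution; the original
            # appends literal characters instead)
            string+=$(echo -e "\\x$(printf %x $((RANDOM % 96 + 32)))")
        done
        echo "$string"
    }

    check_invalid() {                        # hypothetical wrapper around the rpc.py calls
        local opt=$1 value=$2 expect=$3 out
        out=$(scripts/rpc.py nvmf_create_subsystem "$opt" "$value" \
            nqn.2016-06.io.spdk:cnode1 2>&1) || true
        [[ $out == *"$expect"* ]]            # e.g. 'Invalid SN', 'Invalid MN'
    }

    check_invalid -s "$(gen_random_s 21)" 'Invalid SN'   # serial numbers cap at 20 bytes
    check_invalid -d "$(gen_random_s 41)" 'Invalid MN'   # model numbers cap at 40 bytes
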
00:10:44.335 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:10:44.335 [2024-07-15 15:15:48.171907] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:44.335 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:10:44.592 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:10:44.592 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:10:44.592 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:10:44.592 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:10:44.592 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:10:44.850 [2024-07-15 15:15:48.553185] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:10:44.850 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:10:44.850 {
00:10:44.850 "nqn": "nqn.2016-06.io.spdk:cnode",
00:10:44.850 "listen_address": {
00:10:44.850 "trtype": "tcp",
00:10:44.850 "traddr": "",
00:10:44.850 "trsvcid": "4421"
00:10:44.850 },
00:10:44.850 "method": "nvmf_subsystem_remove_listener",
00:10:44.850 "req_id": 1
00:10:44.850 }
00:10:44.850 Got JSON-RPC error response
00:10:44.850 response:
00:10:44.850 {
00:10:44.850 "code": -32602,
00:10:44.850 "message": "Invalid parameters"
00:10:44.850 }'
00:10:44.850 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:10:44.850 {
00:10:44.850 "nqn": "nqn.2016-06.io.spdk:cnode",
00:10:44.850 "listen_address": {
00:10:44.850 "trtype": "tcp",
00:10:44.850 "traddr": "",
00:10:44.850 "trsvcid": "4421"
00:10:44.850 },
00:10:44.850 "method": "nvmf_subsystem_remove_listener",
00:10:44.850 "req_id": 1
00:10:44.850 }
00:10:44.850 Got JSON-RPC error response
00:10:44.850 response:
00:10:44.850 {
00:10:44.850 "code": -32602,
00:10:44.850 "message": "Invalid parameters"
00:10:44.850 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:10:44.850 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25932 -i 0
00:10:44.850 [2024-07-15 15:15:48.737755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25932: invalid cntlid range [0-65519]
00:10:45.108 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:10:45.108 {
00:10:45.108 "nqn": "nqn.2016-06.io.spdk:cnode25932",
00:10:45.108 "min_cntlid": 0,
00:10:45.108 "method": "nvmf_create_subsystem",
00:10:45.108 "req_id": 1
00:10:45.108 }
00:10:45.108 Got JSON-RPC error response
00:10:45.108 response:
00:10:45.108 {
00:10:45.108 "code": -32602,
00:10:45.108 "message": "Invalid cntlid range [0-65519]"
00:10:45.108 }'
00:10:45.108 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:10:45.108 {
00:10:45.108 "nqn": "nqn.2016-06.io.spdk:cnode25932",
00:10:45.108 "min_cntlid": 0,
00:10:45.108 "method": "nvmf_create_subsystem",
00:10:45.108 "req_id": 1
00:10:45.108 }
00:10:45.108 Got JSON-RPC error response
00:10:45.108 response:
00:10:45.108 {
00:10:45.108 "code": -32602,
00:10:45.108 "message": "Invalid cntlid range [0-65519]"
00:10:45.108 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\
\r\a\n\g\e* ]] 00:10:45.108 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9872 -i 65520 00:10:45.108 [2024-07-15 15:15:48.922431] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9872: invalid cntlid range [65520-65519] 00:10:45.108 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:45.108 { 00:10:45.108 "nqn": "nqn.2016-06.io.spdk:cnode9872", 00:10:45.108 "min_cntlid": 65520, 00:10:45.108 "method": "nvmf_create_subsystem", 00:10:45.108 "req_id": 1 00:10:45.108 } 00:10:45.108 Got JSON-RPC error response 00:10:45.108 response: 00:10:45.108 { 00:10:45.108 "code": -32602, 00:10:45.108 "message": "Invalid cntlid range [65520-65519]" 00:10:45.108 }' 00:10:45.108 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:45.108 { 00:10:45.108 "nqn": "nqn.2016-06.io.spdk:cnode9872", 00:10:45.108 "min_cntlid": 65520, 00:10:45.108 "method": "nvmf_create_subsystem", 00:10:45.108 "req_id": 1 00:10:45.108 } 00:10:45.108 Got JSON-RPC error response 00:10:45.108 response: 00:10:45.108 { 00:10:45.108 "code": -32602, 00:10:45.108 "message": "Invalid cntlid range [65520-65519]" 00:10:45.108 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:45.108 15:15:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9740 -I 0 00:10:45.366 [2024-07-15 15:15:49.098976] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9740: invalid cntlid range [1-0] 00:10:45.366 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:45.366 { 00:10:45.366 "nqn": "nqn.2016-06.io.spdk:cnode9740", 00:10:45.366 "max_cntlid": 0, 00:10:45.366 "method": "nvmf_create_subsystem", 00:10:45.366 "req_id": 1 00:10:45.366 } 00:10:45.366 Got JSON-RPC error response 00:10:45.366 response: 00:10:45.366 { 00:10:45.366 "code": -32602, 00:10:45.366 "message": "Invalid cntlid range [1-0]" 00:10:45.366 }' 00:10:45.366 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:45.366 { 00:10:45.366 "nqn": "nqn.2016-06.io.spdk:cnode9740", 00:10:45.366 "max_cntlid": 0, 00:10:45.366 "method": "nvmf_create_subsystem", 00:10:45.366 "req_id": 1 00:10:45.366 } 00:10:45.366 Got JSON-RPC error response 00:10:45.366 response: 00:10:45.366 { 00:10:45.366 "code": -32602, 00:10:45.366 "message": "Invalid cntlid range [1-0]" 00:10:45.366 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:45.366 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3245 -I 65520 00:10:45.624 [2024-07-15 15:15:49.279602] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3245: invalid cntlid range [1-65520] 00:10:45.624 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:45.624 { 00:10:45.624 "nqn": "nqn.2016-06.io.spdk:cnode3245", 00:10:45.624 "max_cntlid": 65520, 00:10:45.624 "method": "nvmf_create_subsystem", 00:10:45.624 "req_id": 1 00:10:45.624 } 00:10:45.624 Got JSON-RPC error response 00:10:45.624 response: 00:10:45.624 { 00:10:45.624 "code": -32602, 00:10:45.624 "message": "Invalid cntlid range [1-65520]" 00:10:45.624 }' 00:10:45.624 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # 
[[ request: 00:10:45.624 { 00:10:45.624 "nqn": "nqn.2016-06.io.spdk:cnode3245", 00:10:45.624 "max_cntlid": 65520, 00:10:45.624 "method": "nvmf_create_subsystem", 00:10:45.624 "req_id": 1 00:10:45.624 } 00:10:45.624 Got JSON-RPC error response 00:10:45.624 response: 00:10:45.624 { 00:10:45.624 "code": -32602, 00:10:45.624 "message": "Invalid cntlid range [1-65520]" 00:10:45.624 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:45.624 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18987 -i 6 -I 5 00:10:45.624 [2024-07-15 15:15:49.472263] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18987: invalid cntlid range [6-5] 00:10:45.624 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:45.624 { 00:10:45.624 "nqn": "nqn.2016-06.io.spdk:cnode18987", 00:10:45.624 "min_cntlid": 6, 00:10:45.624 "max_cntlid": 5, 00:10:45.624 "method": "nvmf_create_subsystem", 00:10:45.624 "req_id": 1 00:10:45.624 } 00:10:45.624 Got JSON-RPC error response 00:10:45.624 response: 00:10:45.624 { 00:10:45.624 "code": -32602, 00:10:45.624 "message": "Invalid cntlid range [6-5]" 00:10:45.624 }' 00:10:45.624 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:45.624 { 00:10:45.624 "nqn": "nqn.2016-06.io.spdk:cnode18987", 00:10:45.624 "min_cntlid": 6, 00:10:45.624 "max_cntlid": 5, 00:10:45.624 "method": "nvmf_create_subsystem", 00:10:45.624 "req_id": 1 00:10:45.624 } 00:10:45.624 Got JSON-RPC error response 00:10:45.624 response: 00:10:45.624 { 00:10:45.624 "code": -32602, 00:10:45.624 "message": "Invalid cntlid range [6-5]" 00:10:45.624 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:45.624 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:45.882 { 00:10:45.882 "name": "foobar", 00:10:45.882 "method": "nvmf_delete_target", 00:10:45.882 "req_id": 1 00:10:45.882 } 00:10:45.882 Got JSON-RPC error response 00:10:45.882 response: 00:10:45.882 { 00:10:45.882 "code": -32602, 00:10:45.882 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:45.882 }' 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:45.882 { 00:10:45.882 "name": "foobar", 00:10:45.882 "method": "nvmf_delete_target", 00:10:45.882 "req_id": 1 00:10:45.882 } 00:10:45.882 Got JSON-RPC error response 00:10:45.882 response: 00:10:45.882 { 00:10:45.882 "code": -32602, 00:10:45.882 "message": "The specified target doesn't exist, cannot delete it." 
00:10:45.882 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:45.882 rmmod nvme_tcp 00:10:45.882 rmmod nvme_fabrics 00:10:45.882 rmmod nvme_keyring 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2935270 ']' 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2935270 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2935270 ']' 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2935270 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2935270 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2935270' 00:10:45.882 killing process with pid 2935270 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2935270 00:10:45.882 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2935270 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.140 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.671 15:15:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:48.671 00:10:48.671 real 0m13.151s 00:10:48.671 user 0m20.182s 00:10:48.671 sys 0m6.215s 00:10:48.671 15:15:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.671 15:15:52 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:48.671 ************************************ 00:10:48.671 END TEST nvmf_invalid 00:10:48.671 ************************************ 00:10:48.671 15:15:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:48.671 15:15:52 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:48.671 15:15:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:48.671 15:15:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.671 15:15:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:48.671 ************************************ 00:10:48.671 START TEST nvmf_abort 00:10:48.671 ************************************ 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:48.671 * Looking for test storage... 00:10:48.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.671 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:48.672 15:15:52 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:48.672 15:15:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.279 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.280 
15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:55.280 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:55.280 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:55.280 Found net devices under 0000:af:00.0: cvl_0_0 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:55.280 Found net devices under 0000:af:00.1: cvl_0_1 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:55.280 15:15:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:55.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:10:55.280 00:10:55.280 --- 10.0.0.2 ping statistics --- 00:10:55.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.280 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:10:55.280 00:10:55.280 --- 10.0.0.1 ping statistics --- 00:10:55.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.280 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2939912 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2939912 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2939912 ']' 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.280 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:55.280 [2024-07-15 15:15:59.130380] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:10:55.280 [2024-07-15 15:15:59.130427] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.280 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.539 [2024-07-15 15:15:59.205088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.539 [2024-07-15 15:15:59.277963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.539 [2024-07-15 15:15:59.278000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.539 [2024-07-15 15:15:59.278009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.539 [2024-07-15 15:15:59.278017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.539 [2024-07-15 15:15:59.278024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.539 [2024-07-15 15:15:59.278130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.539 [2024-07-15 15:15:59.278216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.539 [2024-07-15 15:15:59.278218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.104 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.105 [2024-07-15 15:15:59.990112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.105 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.105 15:15:59 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:56.105 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.105 15:15:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.363 Malloc0 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.363 Delay0 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.363 [2024-07-15 15:16:00.054241] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.363 15:16:00 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:56.363 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.363 [2024-07-15 15:16:00.168688] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:58.890 Initializing NVMe Controllers 00:10:58.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:58.890 controller IO queue size 128 less than required 00:10:58.890 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:58.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:58.890 Initialization complete. Launching workers. 
00:10:58.890 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 41190 00:10:58.890 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41255, failed to submit 62 00:10:58.890 success 41194, unsuccess 61, failed 0 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.890 rmmod nvme_tcp 00:10:58.890 rmmod nvme_fabrics 00:10:58.890 rmmod nvme_keyring 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2939912 ']' 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2939912 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2939912 ']' 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2939912 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2939912 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2939912' 00:10:58.890 killing process with pid 2939912 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2939912 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2939912 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.890 15:16:02 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.422 15:16:04 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:01.422 00:11:01.422 real 0m12.669s 00:11:01.422 user 0m13.557s 00:11:01.422 sys 0m6.485s 00:11:01.422 15:16:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.422 15:16:04 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:01.422 ************************************ 00:11:01.422 END TEST nvmf_abort 00:11:01.422 ************************************ 00:11:01.422 15:16:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:01.422 15:16:04 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:01.422 15:16:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:01.422 15:16:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.422 15:16:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.422 ************************************ 00:11:01.422 START TEST nvmf_ns_hotplug_stress 00:11:01.422 ************************************ 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:01.422 * Looking for test storage... 00:11:01.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.422 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.423 15:16:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.423 15:16:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.423 15:16:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:07.979 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:07.979 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.979 15:16:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:07.979 Found net devices under 0000:af:00.0: cvl_0_0 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.979 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:07.980 Found net devices under 0000:af:00.1: cvl_0_1 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.980 15:16:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:07.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:11:07.980 00:11:07.980 --- 10.0.0.2 ping statistics --- 00:11:07.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.980 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:07.980 00:11:07.980 --- 10.0.0.1 ping statistics --- 00:11:07.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.980 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2944144 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2944144 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2944144 ']' 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.980 15:16:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.239 [2024-07-15 15:16:11.896356] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:11:08.239 [2024-07-15 15:16:11.896411] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.239 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.239 [2024-07-15 15:16:11.972124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:08.239 [2024-07-15 15:16:12.045713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.239 [2024-07-15 15:16:12.045749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.239 [2024-07-15 15:16:12.045761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.239 [2024-07-15 15:16:12.045769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.239 [2024-07-15 15:16:12.045776] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.239 [2024-07-15 15:16:12.045883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.239 [2024-07-15 15:16:12.045967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.239 [2024-07-15 15:16:12.045969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.804 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.804 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:08.804 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.804 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.804 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.062 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.062 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:09.062 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:09.062 [2024-07-15 15:16:12.908986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.062 15:16:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:09.319 15:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.575 [2024-07-15 15:16:13.282607] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.575 15:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:09.575 15:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:11:09.833 Malloc0 00:11:09.833 15:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:10.090 Delay0 00:11:10.090 15:16:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.348 15:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:10.348 NULL1 00:11:10.348 15:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:10.606 15:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2944530 00:11:10.606 15:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:10.606 15:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:10.606 15:16:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.606 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.980 Read completed with error (sct=0, sc=11) 00:11:11.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.980 15:16:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.981 15:16:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:11.981 15:16:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:12.238 true 00:11:12.238 15:16:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:12.238 15:16:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.171 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.171 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:13.171 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:13.171 true 00:11:13.428 15:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:13.428 15:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.428 15:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.686 15:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:13.686 15:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:13.944 true 00:11:13.944 15:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:13.944 15:16:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.878 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.878 15:16:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.137 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.137 15:16:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:15.137 15:16:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:15.396 true 00:11:15.396 15:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:15.396 15:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.331 15:16:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.331 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:16.331 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:16.589 true 00:11:16.589 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:16.589 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:16.848 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.848 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:16.848 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:17.106 true 00:11:17.106 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:17.106 15:16:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.364 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.623 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:17.623 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:17.623 true 00:11:17.623 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:17.623 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.881 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.139 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:18.139 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:18.139 true 00:11:18.139 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:18.139 15:16:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.580 15:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.580 15:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:19.580 15:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1009 00:11:19.580 true 00:11:19.839 15:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:19.839 15:16:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.773 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.773 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:20.773 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:20.773 true 00:11:21.030 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:21.030 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.030 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.287 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:21.287 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:21.287 true 00:11:21.287 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:21.287 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.544 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.819 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.819 [2024-07-15 15:16:25.527183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.819 [2024-07-15 15:16:25.527257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.819 [2024-07-15 15:16:25.527297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.819 [2024-07-15 15:16:25.527334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.819 [2024-07-15 15:16:25.527370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.819 [2024-07-15 15:16:25.527409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.819 [2024-07-15 
[2024-07-15 15:16:25.550795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.550846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.550892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.550936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.550978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.551978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.552890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553445] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.553998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.554036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.554078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.554121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.554167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.824 [2024-07-15 15:16:25.554213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 
[2024-07-15 15:16:25.554545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.554977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.555922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:21.825 [2024-07-15 15:16:25.556661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 [2024-07-15 15:16:25.556988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.825 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:21.825 [2024-07-15 15:16:25.557031] ctrlr_bdev.c: 
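For context on the records collapsed above: each rejected READ carries NLB 1 at a 512-byte block size, i.e. a 512-byte transfer, while the SGL supplied with the command describes only 1 byte, so nvmf_bdev_ctrlr_read_cmd fails the command and the initiator sees the read complete with an error (the "Read completed with error" summary below). The sh@49/sh@50 trace shows the hotplug-stress script growing the NULL1 bdev under this I/O load via the bdev_null_resize RPC. A minimal sketch of such a resize loop, assuming hypothetical start/stop sizes and pacing (rpc.py, bdev_null_resize, and NULL1 are taken from the trace; everything else is illustrative, not the actual ns_hotplug_stress.sh):

    #!/usr/bin/env bash
    # Illustrative only: grow a null bdev in small steps while host I/O runs.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                        # starting size: hypothetical
    while [ "$null_size" -lt 1020 ]; do   # upper bound: hypothetical
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"  # same RPC as the sh@50 trace above
        sleep 0.1                         # pacing between resizes: hypothetical
    done

Resizing in small steps while reads are in flight is what opens the window where in-flight commands no longer match the namespace, which appears to be exactly the condition this stress test is exercising.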
00:11:21.825 [... same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* record repeats for every timestamp from 15:16:25.557031 through 15:16:25.572104; duplicates omitted ...]
00:11:21.828 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:21.828 [2024-07-15 15:16:25.572445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:21.828 [2024-07-15 15:16:25.572490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:21.828 [2024-07-15 15:16:25.572529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.572977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.573969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.828 [2024-07-15 15:16:25.574340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574735] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.574984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.575954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 
[2024-07-15 15:16:25.576316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.576957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.577969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578917] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.578961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.829 [2024-07-15 15:16:25.579866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.579915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.579957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 
[2024-07-15 15:16:25.580139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.580978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.581978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582740] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.582986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 
[2024-07-15 15:16:25.583830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.583975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.584743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.585190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.830 [2024-07-15 15:16:25.585228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.585966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586404] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.586981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 
[2024-07-15 15:16:25.587593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.587842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.588952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.589007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.589062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.589108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.589152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.589200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.831 [2024-07-15 15:16:25.589245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:11:21.831 [2024-07-15 15:16:25.589286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 *ERROR* line repeats continuously, several hundred occurrences spanning 15:16:25.589328 through 15:16:25.604684 ...]
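For context on what this flood of errors is exercising: each message says the read command would transfer NLB 1 * block size 512 = 512 bytes, but the request only carries a 1-byte SGL, so nvmf_bdev_ctrlr_read_cmd rejects the command before it ever reaches the bdev layer. The sketch below is a minimal, self-contained illustration of that length check; the function and parameter names are stand-ins for illustration, not SPDK's exact internals.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-in for the check behind the *ERROR* lines above:
     * an NVMe read transfers nlb * block_size bytes, and the request's SGL
     * must be large enough to hold all of them. */
    static int
    read_cmd_length_ok(uint64_t nlb, uint64_t block_size, uint32_t sgl_length)
    {
            if (nlb * block_size > sgl_length) {
                    fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu64
                            " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
                    /* SPDK completes such a request with an invalid-SGL-length
                     * status instead of submitting I/O. */
                    return 0;
            }
            return 1;
    }

    int
    main(void)
    {
            /* The exact values from the log: NLB 1, block size 512, SGL length 1. */
            return read_cmd_length_ok(1, 512, 1) ? 1 : 0;
    }

The unit test drives this rejection path in a tight loop, which is why the identical line floods the log; the repeated rejections appear to be the deliberately exercised error path rather than a failure of the test itself.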
00:11:21.834 [2024-07-15 15:16:25.604719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line repeats continuously a few hundred more times, spanning 15:16:25.604760 through 15:16:25.615636 ...]
00:11:21.837 [2024-07-15 15:16:25.615676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.615987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.616989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617168] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.617977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 
[2024-07-15 15:16:25.618287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.618963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.619986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:21.837 [2024-07-15 15:16:25.620155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.837 [2024-07-15 15:16:25.620504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:21.838 [2024-07-15 15:16:25.620861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.620992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.621964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.622532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623398] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.623987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 
[2024-07-15 15:16:25.624474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.624963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.838 [2024-07-15 15:16:25.625360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.625737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.626967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627207] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.627984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 
[2024-07-15 15:16:25.628256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.628920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.629959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.839 [2024-07-15 15:16:25.630759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.630803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.630857] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.630908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.630954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.630998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 [2024-07-15 15:16:25.631927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840 
[2024-07-15 15:16:25.631981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.840
[identical *ERROR* line from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated continuously through 2024-07-15 15:16:25.658771, elapsed stamps 00:11:21.840 - 00:11:21.845]
[2024-07-15 15:16:25.658812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.658856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.658897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.658937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.658978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.659955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.845 [2024-07-15 15:16:25.660629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.660965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661498] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.661986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 
[2024-07-15 15:16:25.662569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.662988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.663634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.664996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665247] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.846 [2024-07-15 15:16:25.665795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.665850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.665895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.665942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.665987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 
[2024-07-15 15:16:25.666448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.666928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.667989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.668976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669066] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.669983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:21.847 [2024-07-15 15:16:25.670581] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.670961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.671007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.671055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.671108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.671161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.671208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.847 [2024-07-15 15:16:25.671257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 
[2024-07-15 15:16:25.671812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.671997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.672967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.673995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674407] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.674977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 [2024-07-15 15:16:25.675462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.848 
[2024-07-15 15:16:25.675504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 *ERROR* line repeats several hundred times between 15:16:25.675504 and 15:16:25.703047 (console timestamps 00:11:21.848-00:11:21.854); repetitions elided ...]
00:11:21.854 [2024-07-15 15:16:25.703047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854
[2024-07-15 15:16:25.703098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.703987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.704977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705398] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.705756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.854 [2024-07-15 15:16:25.706860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.706902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.706939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 
[2024-07-15 15:16:25.706979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.707984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:21.855 [2024-07-15 15:16:25.708831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709667] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.709989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 
[2024-07-15 15:16:25.710794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.710994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.128 [2024-07-15 15:16:25.711971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.712990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713505] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.713978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 
[2024-07-15 15:16:25.714597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.714992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.715965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.716964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717204] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.717968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 
[2024-07-15 15:16:25.718390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.718955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.719970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.720958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.721001] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.721046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.721094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.721140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.721184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.721229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.129 [2024-07-15 15:16:25.721278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.721328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.721374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.721417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.721461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.721504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.721554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.721600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.130 [2024-07-15 15:16:25.722395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722552] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.722996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 true 00:11:22.130 [2024-07-15 15:16:25.723127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:11:22.130 [2024-07-15 15:16:25.723668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.723989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.724937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.725979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726343] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.726983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 
[2024-07-15 15:16:25.727412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.727907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.728992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.729974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.130 [2024-07-15 15:16:25.730017] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.730991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 
[2024-07-15 15:16:25.731254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.731963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.732995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733849] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.733974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.734966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 
[2024-07-15 15:16:25.735384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.735974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.736966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.737010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.737054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.737102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.737144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.737188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.737237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.131 [2024-07-15 15:16:25.737279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.737319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.737361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.737401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.737609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.737944] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.737991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.738959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 
[2024-07-15 15:16:25.739144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.739987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.740711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.132 [2024-07-15 15:16:25.741781] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 *ERROR* lines repeated several hundred times with advancing timestamps (2024-07-15 15:16:25.741826 through 15:16:25.746763); duplicates elided ...]
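For context on what the flood above means: ctrlr_bdev.c:309 (nvmf_bdev_ctrlr_read_cmd) is the target's sanity check that a Read command's requested transfer, NLB (number of logical blocks) times the block size, fits in the data buffer described by the command's SGL. The following is a minimal standalone sketch of that check, not SPDK's exact code; the names (read_cmd_length_ok, nlb, sgl_length) are illustrative only.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch only -- not SPDK's actual implementation. The target
 * rejects a Read whose requested transfer (NLB * block size) is larger than
 * the buffer the SGL describes; SPDK presumably completes such a command
 * with an "SGL length invalid" status instead of submitting it to the bdev. */
static bool
read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
        if (nlb * block_size > sgl_length) {
                fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
                return false;
        }
        return true;
}

int
main(void)
{
        /* The values seen in this log: 1 block of 512 bytes vs. a 1-byte SGL. */
        return read_cmd_length_ok(1, 512, 1) ? 0 : 1;
}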
00:11:22.133 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530
00:11:22.133 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
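Those two trace lines are the substance of this stretch of the log: ns_hotplug_stress.sh line 44 uses kill -0 to confirm that the target process (PID 2944530) is still alive, and line 45 hot-removes namespace 1 from subsystem nqn.2016-06.io.spdk:cnode1 via rpc.py while I/O is still being driven at it. The surrounding flood of identical read-length errors is consistent with reads racing the namespace hot-remove, i.e. expected noise for this stress test rather than a target failure; the liveness check passing is what the test actually verifies.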
[... the same ctrlr_bdev.c:309 *ERROR* line continues to repeat after the namespace removal (timestamps 15:16:25.748097 through 15:16:25.768235); duplicates elided ...]
[2024-07-15 15:16:25.768277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.768974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.769999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770755] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.770964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 
[2024-07-15 15:16:25.771918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.771961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.772009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.772055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.135 [2024-07-15 15:16:25.772098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.136 [2024-07-15 15:16:25.772616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.772950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 
15:16:25.773453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.773993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:22.136 [2024-07-15 15:16:25.774534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.774959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.775955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.776983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777217] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.777962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 
[2024-07-15 15:16:25.778333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.778555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.779978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.136 [2024-07-15 15:16:25.780566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780924] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.780967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.781931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 
[2024-07-15 15:16:25.782501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.782969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.783971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784646] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137 [2024-07-15 15:16:25.784691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.137
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" records repeated verbatim from 15:16:25.784739 through 15:16:25.811501 (Jenkins stamps 00:11:22.137-00:11:22.140); duplicate run collapsed ...]
00:11:22.140 [2024-07-15 15:16:25.811546] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.811977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 
[2024-07-15 15:16:25.812761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.812992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.813984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.814997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815176] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.140 [2024-07-15 15:16:25.815660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.815993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 
[2024-07-15 15:16:25.816622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.816990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.817990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.818979] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.819962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 
[2024-07-15 15:16:25.820447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.141 [2024-07-15 15:16:25.820527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.820974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 
15:16:25.821426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.821974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:22.141 [2024-07-15 15:16:25.822886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.822973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.823965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.141 [2024-07-15 15:16:25.824324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.824976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825062] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.825726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 
[2024-07-15 15:16:25.826703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.826997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.142 [2024-07-15 15:16:25.827761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:22.142 [... the *ERROR* line above repeats several hundred times, timestamps 15:16:25.827801 through 15:16:25.853645; identical repetitions elided ...]
[2024-07-15 15:16:25.853684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.853726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.853765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.853810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.853854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.853894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.853934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.853977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.854998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.855991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856397] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.856968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 
[2024-07-15 15:16:25.857521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.857935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.858977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.859953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.145 [2024-07-15 15:16:25.860016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860154] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.860903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 
[2024-07-15 15:16:25.861667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.861991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.862988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863864] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.863958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.864959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 
[2024-07-15 15:16:25.865504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.865963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.866980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.867985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868056] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.868998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.869041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.869092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.869133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 
[2024-07-15 15:16:25.869173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.869214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.146 [2024-07-15 15:16:25.869256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.869978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.870976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.147 [2024-07-15 15:16:25.871112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.147 [2024-07-15 15:16:25.871887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:11:22.147 [2024-07-15 15:16:25.871939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:22.150 [... the identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error above repeats verbatim for every read submitted from 15:16:25.871982 through 15:16:25.898744; only the timestamps differ, duplicate entries collapsed ...]
[2024-07-15 15:16:25.898806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.898857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.898906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.898950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.898996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.899976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.900989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901551] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.901963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 [2024-07-15 15:16:25.902517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.150 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
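A note on the flood above: nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309) fails a read whenever the requested transfer, NLB logical blocks times the block size, is larger than the data buffer described by the command's SGL; here each 1-block (512-byte) read is paired with a 1-byte SGL, so every read the nvmf_ns_hotplug_stress run issues trips this check and completes with an error. A minimal sketch of that kind of length guard, with illustrative names only (not the literal SPDK source):

#include <stdbool.h>
#include <stdint.h>

/* Sketch: reject a read whose transfer length exceeds the SGL capacity.
 * nlb is the number of logical blocks to read (the NVMe NLB command
 * field is 0-based, so callers typically pass the field value plus 1). */
static bool
read_fits_sgl(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
	if (nlb * block_size > sgl_length) {
		/* The condition behind "Read NLB 1 * block size 512 >
		 * SGL length 1" in the log above. */
		return false;
	}
	return true;
}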
15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.150
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.150
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.150
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.431
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.431
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.431
[2024-07-15 15:16:26.106050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.431
[... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1, repeated for each failed read, timestamps 15:16:26.106121 through 15:16:26.123287, stream time 00:11:22.431 to 00:11:22.434 ...]
[2024-07-15 15:16:26.123335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.123995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.124986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125921] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.125966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.434 [2024-07-15 15:16:26.126904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.126942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 
[2024-07-15 15:16:26.126982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.127541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.435 [2024-07-15 15:16:26.128013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 
15:16:26.128577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.128994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:11:22.435 [2024-07-15 15:16:26.129660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.129996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.130718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.131969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132253] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.132993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 
[2024-07-15 15:16:26.133406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.133878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.134345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.134393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.435 [2024-07-15 15:16:26.134439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.134998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.135997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136111] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.136991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:22.436 [2024-07-15 15:16:26.137566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137615] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:22.436 [2024-07-15 15:16:26.137873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.137987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.138997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [2024-07-15 15:16:26.139682] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.436 [... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats verbatim several hundred times, 15:16:26.139730 through 15:16:26.165272 (console timestamps 00:11:22.436-00:11:22.440) ...]
size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.165318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.165361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.165406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.165456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.165512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.165560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.165606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166826] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.166973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.167956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 
[2024-07-15 15:16:26.168047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.168813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.169317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.169360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.169402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.169443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.440 [2024-07-15 15:16:26.169478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.169999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170723] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.170971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 
[2024-07-15 15:16:26.171813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.171998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.172964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.173996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174410] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.174976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.441 [2024-07-15 15:16:26.175812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.175862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.175905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.175944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.175984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 
[2024-07-15 15:16:26.176029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.176986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.177999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178120] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.442 [2024-07-15 15:16:26.178757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.178962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179653] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.179968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 
[2024-07-15 15:16:26.180776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.180965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.181519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.442 [2024-07-15 15:16:26.182897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.182935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.182970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443 [2024-07-15 15:16:26.183402] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.443
[... last message repeated several hundred times, 2024-07-15 15:16:26.183 through 15:16:26.210; identical duplicates collapsed ...] 00:11:22.446
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.446 [2024-07-15 15:16:26.210996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 
[2024-07-15 15:16:26.211738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.211992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.212981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.213976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214354] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.214973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 
[2024-07-15 15:16:26.215481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.215986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.216954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.447 [2024-07-15 15:16:26.217409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.217958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218239] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.218982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 
[2024-07-15 15:16:26.219363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.219695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.220967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.221985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222106] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.222948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 
[2024-07-15 15:16:26.223586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.223960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.448 [2024-07-15 15:16:26.224533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.224988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225751] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.225981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.449 [2024-07-15 15:16:26.226850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.226969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 [2024-07-15 15:16:26.227309] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.449 
[... the identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entry repeats continuously from 15:16:26.227345 through 15:16:26.254 (elapsed 00:11:22.449 to 00:11:22.453) ...] 
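For context on the flood above: the message comes from a single length validation in the read path, where the bytes requested by the command (number of logical blocks times the block size) exceed what the request's SGL maps (here 1 block * 512 bytes against a 1-byte SGL). The sketch below is a minimal, self-contained illustration of that kind of check; it is not the SPDK source, and the names (req_t, sgl_length, check_read_len) are hypothetical — only the printed message mirrors the log.

/*
 * Illustrative sketch (NOT the SPDK implementation) of the length
 * check behind the repeated *ERROR* line: reject a read whose data
 * transfer cannot fit in the buffer described by the request's SGL.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t nlb;        /* NVMe Number of Logical Blocks, 0-based */
    uint32_t sgl_length; /* bytes addressable through the request SGL */
} req_t;

/* Returns 0 on success, -1 when the transfer exceeds the SGL. */
static int check_read_len(const req_t *req, uint64_t block_size)
{
    /* NVMe encodes NLB as a 0-based value, so add 1. */
    uint64_t num_blocks = (uint64_t)req->nlb + 1;

    if (num_blocks * block_size > req->sgl_length) {
        fprintf(stderr,
                "Read NLB %" PRIu64 " * block size %" PRIu64
                " > SGL length %" PRIu32 "\n",
                num_blocks, block_size, req->sgl_length);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* Mirrors the log: 1 block of 512 bytes against a 1-byte SGL. */
    req_t req = { .nlb = 0, .sgl_length = 1 };
    return check_read_len(&req, 512) ? 1 : 0;
}

The unit test exercises this rejection path repeatedly, which is why the same error line appears hundreds of times in the console output.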
[2024-07-15 15:16:26.254613] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.254980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.255974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 
[2024-07-15 15:16:26.256152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.256988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.257962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258399] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.258992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.259033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.259079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.259121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.259159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.259198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.259235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.453 [2024-07-15 15:16:26.259281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.259962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 
[2024-07-15 15:16:26.260006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.260975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.261738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262684] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.262976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 
[2024-07-15 15:16:26.263774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.263971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.264981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.265989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.454 [2024-07-15 15:16:26.266480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266527] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.266985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 
[2024-07-15 15:16:26.267688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.267967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.268982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.269992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270420] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.270985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 [2024-07-15 15:16:26.271893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 
[2024-07-15 15:16:26.271939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.455 
[2024-07-15 15:16:26.271982 .. 15:16:26.278709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entries collapsed) 00:11:22.456 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.456 
[2024-07-15 15:16:26.278756 .. 15:16:26.298801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical entries collapsed)
size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.297876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.297918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.297950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.297993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298854] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.298994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.299984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 
[2024-07-15 15:16:26.300068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.300985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.301033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.459 [2024-07-15 15:16:26.301080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.301977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302853] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.302977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.303593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 
[2024-07-15 15:16:26.304387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.304969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 true 00:11:22.460 [2024-07-15 15:16:26.305051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.305957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306459] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.306664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.307984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.308032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 
[2024-07-15 15:16:26.308075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.460 [2024-07-15 15:16:26.308120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.308955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.309902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310681] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.310973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 
[2024-07-15 15:16:26.311650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.311974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.312951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.313954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314423] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.314988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.315031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.315076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.315121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.315165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.461 [2024-07-15 15:16:26.315211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 [2024-07-15 15:16:26.315587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.739 
[2024-07-15 15:16:26.315632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:22.739 [identical *ERROR* line repeated continuously for timestamps 15:16:26.315677 through 15:16:26.328179; duplicates collapsed]
00:11:22.741 [identical *ERROR* line repeated for timestamps 15:16:26.328227 through 15:16:26.328783; duplicates collapsed]
00:11:22.741 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530
00:11:22.741 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:22.741 [2024-07-15 15:16:26.329267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:11:22.741 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:11:22.741 [identical *ERROR* line repeated for timestamps 15:16:26.329314 through 15:16:26.329644; duplicates collapsed]
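The repeated *ERROR* line above is the target's read-path transfer-length check firing: NLB (number of logical blocks, here 1) times the namespace block size (512) gives 512 bytes that the read must return, but the SGL the host supplied describes only 1 byte, so the command is completed with an error instead of being submitted to the bdev. The suppressed completion status matches: sct=0 is the generic status code type and sc=15 (0x0f) is Data SGL Length Invalid. Below is a minimal C sketch of that kind of check; the names and structure are illustrative assumptions, not the verbatim SPDK ctrlr_bdev.c source.

    /* Minimal sketch (assumed shape, not verbatim SPDK code) of the length
     * check behind the repeated "Read NLB ... > SGL length ..." message. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_SC_SUCCESS                 0x00
    #define NVME_SC_DATA_SGL_LENGTH_INVALID 0x0f  /* logged as sct=0, sc=15 */

    /* Reject a read whose implied transfer length exceeds what the host's
     * SGL can hold; on success the I/O would go on to the bdev layer. */
    static int
    read_cmd_length_check(uint64_t num_blocks, uint32_t block_size,
                          uint32_t sgl_length)
    {
        if (num_blocks * block_size > sgl_length) {
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n",
                    num_blocks, block_size, sgl_length);
            return NVME_SC_DATA_SGL_LENGTH_INVALID;
        }
        return NVME_SC_SUCCESS;
    }

    int
    main(void)
    {
        /* Values taken from the log line: NLB 1, block size 512, SGL length 1. */
        int sc = read_cmd_length_check(1, 512, 1);
        return sc == NVME_SC_DATA_SGL_LENGTH_INVALID ? 0 : 1;
    }

For the surrounding script context: @44's kill -0 2944530 only checks that the target process (PID 2944530) is still alive, and @45 hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 while I/O is still running, so the "Message suppressed 999 times" line is the target's own log suppression kicking in for the matching read completions.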
00:11:22.741 [identical *ERROR* line repeated continuously for timestamps 15:16:26.329689 through 15:16:26.342719; duplicates collapsed]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.342765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.342820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.342872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.342919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.342964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 
[2024-07-15 15:16:26.343948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.343991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.344959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.345976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.743 [2024-07-15 15:16:26.346413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346496] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.346990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 
[2024-07-15 15:16:26.347614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.347997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.348958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.349961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350344] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.350987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 
[2024-07-15 15:16:26.351934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.351991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.352964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.353969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.354013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.354061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.354108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.354157] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.354202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.744 [2024-07-15 15:16:26.354246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.354614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 
[2024-07-15 15:16:26.355707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.355988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.356963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.357812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358385] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.358975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 [2024-07-15 15:16:26.359525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.745 
[2024-07-15 15:16:26.359568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same ctrlr_bdev.c:309 read-length error repeats several hundred times between 15:16:26.359568 and 15:16:26.380338 (elapsed 00:11:22.745-00:11:22.748); duplicate lines omitted, only the timestamps differ ...]
00:11:22.748 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
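The flood of identical errors above comes from the NVMe-oF target's read-command validation: nvmf_bdev_ctrlr_read_cmd rejects any read whose requested transfer length (NLB * block size) exceeds the length of the SGL supplied with the command, and the suppressed completions (sct=0, sc=15) correspond to the NVMe generic status "Data SGL Length Invalid" (0x0f). Below is a minimal C sketch of that kind of length check; the function and macro names are simplified stand-ins for illustration, not SPDK's actual implementation in lib/nvmf/ctrlr_bdev.c.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the NVMe generic status code seen in the log
 * as sc=15: "Data SGL Length Invalid" (0x0f). */
#define SC_DATA_SGL_LENGTH_INVALID 0x0f

/* Sketch of the validation behind the repeated error above: a read of
 * NLB blocks must fit inside the SGL the host provided.
 * Returns 0 on success or the failing NVMe status code. */
static int
read_cmd_check_length(uint64_t nlb, uint32_t block_size, uint64_t sgl_length)
{
        uint64_t xfer_len = nlb * block_size;

        if (xfer_len > sgl_length) {
                fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu64 "\n",
                        nlb, block_size, sgl_length);
                return SC_DATA_SGL_LENGTH_INVALID;
        }
        return 0;
}

int
main(void)
{
        /* The failing case from the log: NLB 1, 512-byte blocks, 1-byte SGL. */
        int sc = read_cmd_check_length(1, 512, 1);
        printf("status: sct=0, sc=%d (0x%02x)\n", sc, sc);
        return 0;
}

Compiled and run, the sketch emits the same error line once and reports sc=15 (0x0f), matching the suppressed completions above.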
[... the same ctrlr_bdev.c:309 read-length error resumes and repeats well over a hundred more times between 15:16:26.380817 and 15:16:26.386484 (elapsed 00:11:22.748-00:11:22.749); duplicate lines omitted ...]
00:11:22.749 [2024-07-15 15:16:26.386522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.386562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.386601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.386642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.386683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.387977] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.388960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 
[2024-07-15 15:16:26.389097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.389857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.390964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.749 [2024-07-15 15:16:26.391817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.391860] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.391905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.391947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.391981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 
[2024-07-15 15:16:26.392923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.392979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.393971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.394967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395422] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.395956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.396941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 
[2024-07-15 15:16:26.396991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.397997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.398983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399322] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.750 [2024-07-15 15:16:26.399872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.399908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.399947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.399990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 
[2024-07-15 15:16:26.400729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.400976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.401976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.402968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403395] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751 [2024-07-15 15:16:26.403438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.751
[... identical *ERROR* line repeated several hundred times with consecutive timestamps (15:16:26.403483 through 15:16:26.428827), elided ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.756 [2024-07-15 15:16:26.428878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756
[... identical *ERROR* line repeated with timestamps through 15:16:26.430067, elided ...]
[2024-07-15 15:16:26.430109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL
length 1 00:11:22.756 [2024-07-15 15:16:26.430152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.430989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.431992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432820] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.432999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.433033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.433075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.433113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.433162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.433199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.433242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.756 [2024-07-15 15:16:26.433283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.433908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 
[2024-07-15 15:16:26.433954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.434970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.435983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436586] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.436960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.437993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 
[2024-07-15 15:16:26.438247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.757 [2024-07-15 15:16:26.438714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.438758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.438804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.438860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.438903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.438950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.439983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440558] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.440816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.441956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 
[2024-07-15 15:16:26.442238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.442988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.758 [2024-07-15 15:16:26.443920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.443959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.444987] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.445975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 
[2024-07-15 15:16:26.446064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.446967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.447013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.447057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.447102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.447150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.447196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.447238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.759 [2024-07-15 15:16:26.447273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated with monotonically increasing timestamps from 15:16:26.447318 through 15:16:26.473279 (wall clock 00:11:22.759 to 00:11:22.764); duplicate log entries elided ...]
[2024-07-15 15:16:26.473732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.473775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.473813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.473859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.473899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.473941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.473976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.764 [2024-07-15 15:16:26.474589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.474973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475919] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.475965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.476965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 
[2024-07-15 15:16:26.477573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.477987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.478969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.479760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480258] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:11:22.765 [2024-07-15 15:16:26.480500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.765 [2024-07-15 15:16:26.480770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.480808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.480846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.480888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.480929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.480965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481193] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.481974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 
[2024-07-15 15:16:26.482200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.482683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.483983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.484967] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.485996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 [2024-07-15 15:16:26.486044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:11:22.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.766 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
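For context, the flooded message is a request-length validation failure, expected under this stress test: the read asks for NLB * block size = 1 * 512 = 512 bytes, but the SGL the host supplied describes only 1 byte, so the target rejects the command before it reaches the bdev. A minimal sketch of the shape of that check, written in shell purely for illustration (the real check lives in C in ctrlr_bdev.c; the variable names here are hypothetical):

    # Illustrative only -- mirrors the shape of the ctrlr_bdev.c:309 check, not SPDK source.
    nlb=1          # number of logical blocks requested by the read
    block_size=512 # namespace block size in bytes
    sgl_length=1   # bytes described by the host-supplied SGL
    if (( nlb * block_size > sgl_length )); then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
    fi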
00:11:22.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:22.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:22.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:23.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:23.025 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:11:23.025 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
true
00:11:23.025 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530
00:11:23.025 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... the same @46 add_ns -> @49/@50 bdev_null_resize -> @44 kill -0 -> @45 remove_ns cycle repeats for null_size 1015 through 1030 (15:16:27 through 15:16:42), interleaved with further "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notices ...]
00:11:39.486 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:11:39.486 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
true
00:11:39.486 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530
00:11:39.486 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
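Read as a whole, these trace lines record the hotplug stress loop in test/nvmf/target/ns_hotplug_stress.sh: while the background application (PID 2944530) is alive, the script removes and re-adds the Delay0 namespace and grows the NULL1 bdev by one block each pass. A minimal sketch of that loop, reconstructed from the @44-@50 trace references above (the script's exact internals may differ):

    # Sketch only -- reconstructed from the trace lines, not copied from the script.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1013
    while kill -0 2944530; do                                           # @44: app still running?
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46
        null_size=$((null_size + 1))                                    # @49: 1014, 1015, ...
        $rpc_py bdev_null_resize NULL1 "$null_size"                     # @50: prints "true"
    done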
00:11:40.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.423 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:40.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:11:40.682 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:11:40.682 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
true
00:11:40.682 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530
00:11:40.682 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:40.941 Initializing NVMe Controllers
00:11:40.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:40.941 Controller IO queue size 128, less than required.
00:11:40.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:40.941 Controller IO queue size 128, less than required.
00:11:40.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:40.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:40.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:40.941 Initialization complete. Launching workers.
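The "Controller IO queue size 128, less than required" advisory is benign here: the initiator-side workload asked for a deeper queue than the controller granted, so excess requests queue in the NVMe driver instead. Where the warning does matter, the usual remedy is to lower the submission queue depth or IO size in the workload invocation; for example, with SPDK's example perf tool queue depth is the -q parameter (the invocation below is illustrative, not taken from this job):

    # Illustrative only: cap queue depth at what the controller granted (128).
    ./build/examples/perf -q 128 -o 4096 -w randread -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'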
00:11:40.941 ======================================================== 00:11:40.941 Latency(us) 00:11:40.941 Device Information : IOPS MiB/s Average min max 00:11:40.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2838.55 1.39 29337.83 1590.93 1074424.61 00:11:40.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17442.77 8.52 7338.14 2065.59 288585.12 00:11:40.941 ======================================================== 00:11:40.941 Total : 20281.32 9.90 10417.18 1590.93 1074424.61 00:11:40.941 00:11:40.941 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.233 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:41.233 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:41.233 true 00:11:41.233 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2944530 00:11:41.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2944530) - No such process 00:11:41.233 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2944530 00:11:41.233 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.492 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:41.750 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:41.750 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:41.750 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:41.750 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:41.750 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:41.750 null0 00:11:41.751 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:41.751 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:41.751 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:42.009 null1 00:11:42.009 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.009 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.009 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:42.267 null2 00:11:42.267 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.267 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:11:42.267 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:42.267 null3 00:11:42.267 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.267 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.267 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:42.525 null4 00:11:42.525 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.525 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.525 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:42.783 null5 00:11:42.783 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.783 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.783 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:42.783 null6 00:11:42.783 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:42.783 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:42.783 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:43.042 null7 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
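(Note: the @44-@50 entries above trace the main stress loop of ns_hotplug_stress.sh: while the perf I/O generator (pid 2944530 here) is alive, namespace 1 is hot-removed and re-added on cnode1 and the NULL1 bdev is grown one unit per pass (null_size 1028 → 1033), until kill -0 reports "No such process" and the @53 wait reaps the generator. A minimal bash sketch reconstructed from those xtrace lines; the loop construct and the PERF_PID variable name are assumptions, since only the traced commands appear in the log.)

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1024
while kill -0 "$PERF_PID" 2>/dev/null; do                              # @44: run until the I/O generator exits
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45: hot-remove NSID 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46: hot-add the Delay0 bdev back
    null_size=$((null_size + 1))                                       # @49: traced as e.g. null_size=1028
    $rpc_py bdev_null_resize NULL1 "$null_size"                        # @50: resize under load; RPC prints "true"
done
wait "$PERF_PID"                                                       # @53: reap the finished generator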
00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
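(Note: the @58-@60 entries above set up the multi-worker phase: nthreads=8 and one null bdev per worker, null0 through null7, each created with arguments 100 4096, which follow bdev_null_create's size-in-MB and block-size-in-bytes usage. A sketch of that loop as the trace shows it, reusing the rpc_py shorthand from the sketch above:)

nthreads=8                                           # @58
pids=()                                              # @58: filled in by the worker-spawn loop below
for ((i = 0; i < nthreads; i++)); do                 # @59
    $rpc_py bdev_null_create "null$i" 100 4096       # @60: 100 MB, 4096-byte blocks; prints the new name, e.g. "null0"
done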
00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
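(Note: the interleaved @14-@18 and @62-@64 entries around here are eight add_remove workers being launched, one per namespace/bdev pair, and the @66 entry just below — "wait 2950081 2950082 ..." — reaps them. A sketch reconstructed from those trace lines; the backgrounding "&" is inferred rather than quoted, since it never shows up in xtrace output, and the heavy interleaving of the traced lines is exactly what running these bodies in parallel produces.)

add_remove() {                             # ns_hotplug_stress.sh@14-18
    local nsid=$1 bdev=$2                  # @14
    for ((i = 0; i < 10; i++)); do         # @16: ten add/remove rounds per worker
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
    done
}

for ((i = 0; i < nthreads; i++)); do       # @62
    add_remove $((i + 1)) "null$i" &       # @63: e.g. "add_remove 1 null0"; "&" inferred
    pids+=($!)                             # @64
done
wait "${pids[@]}"                          # @66: blocks on all eight worker PIDs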
00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2950081 2950082 2950084 2950087 2950090 2950092 2950094 2950096 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:43.042 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.043 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:43.301 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.560 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:43.561 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:43.820 15:16:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:43.820 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:44.080 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:44.339 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.598 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:44.856 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:45.115 15:16:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:45.115 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.375 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:45.634 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:45.635 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:45.894 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:46.153 15:16:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.153 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:46.153 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:46.153 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.153 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:46.153 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.412 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:46.671 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:46.931 rmmod nvme_tcp 00:11:46.931 rmmod nvme_fabrics 00:11:46.931 rmmod nvme_keyring 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2944144 ']' 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2944144 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2944144 ']' 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2944144 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2944144 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2944144' 00:11:46.931 killing 
process with pid 2944144 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2944144 00:11:46.931 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2944144 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.191 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.725 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:49.725 00:11:49.725 real 0m48.184s 00:11:49.725 user 3m5.737s 00:11:49.725 sys 0m21.147s 00:11:49.725 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:49.725 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.725 ************************************ 00:11:49.725 END TEST nvmf_ns_hotplug_stress 00:11:49.725 ************************************ 00:11:49.725 15:16:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:49.725 15:16:53 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:49.725 15:16:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:49.725 15:16:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.725 15:16:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:49.725 ************************************ 00:11:49.725 START TEST nvmf_connect_stress 00:11:49.725 ************************************ 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:49.725 * Looking for test storage... 
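The interleaved @16/@17/@18 entries above are eight concurrent workers, one per null bdev, each repeatedly attaching and detaching its namespace on nqn.2016-06.io.spdk:cnode1. A minimal sketch of the loop behind that trace, assuming the per-namespace subshell layout and variable names (the two rpc.py calls themselves are verbatim from the log):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    for nsid in $(seq 1 8); do               # null0..null7 map to nsid 1..8 in the trace
      (
        for ((i = 0; i < 10; ++i)); do       # the (( ++i )) / (( i < 10 )) checks at @16
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subnqn" "null$((nsid - 1))"   # @17
          "$rpc_py" nvmf_subsystem_remove_ns "$subnqn" "$nsid"                       # @18
        done
      ) &
    done
    wait                                      # all workers finish before the trap is cleared

The out-of-order add/remove lines in the log are expected with this layout: the workers share one xtrace file descriptor, so their output interleaves.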
00:11:49.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.725 15:16:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:56.290 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:56.290 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:56.290 Found net devices under 0000:af:00.0: cvl_0_0 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.290 15:16:59 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:56.290 Found net devices under 0000:af:00.1: cvl_0_1 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.290 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:56.291 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:56.291 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.291 15:16:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.291 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.291 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.291 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:56.291 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.291 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.291 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:56.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:56.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:11:56.550 00:11:56.550 --- 10.0.0.2 ping statistics --- 00:11:56.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.550 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:56.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:11:56.550 00:11:56.550 --- 10.0.0.1 ping statistics --- 00:11:56.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.550 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2954727 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2954727 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2954727 ']' 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:56.550 15:17:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.550 [2024-07-15 15:17:00.313345] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
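The nvmf_tcp_init sequence traced above is what gives every phy-mode test in this run its two-port topology: the first E810 port (cvl_0_0) becomes the target inside a private network namespace, the second (cvl_0_1) stays in the root namespace as the initiator, and both directions are ping-verified before any NVMe traffic flows. Consolidated from the nvmf/common.sh@242-268 entries (every command below is as traced; only the grouping into one listing and the comments are added):

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"         # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # as traced: admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1      # target -> initiator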
00:11:56.550 [2024-07-15 15:17:00.313393] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.550 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.550 [2024-07-15 15:17:00.387794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:56.809 [2024-07-15 15:17:00.458361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.809 [2024-07-15 15:17:00.458400] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.809 [2024-07-15 15:17:00.458410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.809 [2024-07-15 15:17:00.458419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.809 [2024-07-15 15:17:00.458426] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.809 [2024-07-15 15:17:00.458527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.809 [2024-07-15 15:17:00.458630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.809 [2024-07-15 15:17:00.458632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.410 [2024-07-15 15:17:01.170773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.410 [2024-07-15 15:17:01.203940] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.410 NULL1 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2954949 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.410 15:17:01 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.410 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.725 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.983 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.984 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:11:57.984 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.984 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.984 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.242 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.242 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:11:58.242 15:17:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.242 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.242 15:17:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.501 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.501 15:17:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 2954949 00:11:58.501 15:17:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.501 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.501 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.762 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.762 15:17:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:11:58.762 15:17:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.762 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.762 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.330 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.330 15:17:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:11:59.330 15:17:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.330 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.330 15:17:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.589 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.589 15:17:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:11:59.589 15:17:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.589 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.589 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.847 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.847 15:17:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:11:59.847 15:17:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.847 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.847 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.106 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.106 15:17:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:00.106 15:17:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.106 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.106 15:17:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.365 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.365 15:17:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:00.365 15:17:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.365 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.365 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.932 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.932 15:17:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:00.932 15:17:04 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.932 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.932 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.190 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.190 15:17:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:01.190 15:17:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.190 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.190 15:17:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.449 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.449 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:01.449 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.449 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.449 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.707 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.707 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:01.707 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.707 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.707 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.274 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.274 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:02.274 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.274 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.274 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.534 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.534 15:17:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:02.534 15:17:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.534 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.534 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.792 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.792 15:17:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:02.792 15:17:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.792 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.792 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.051 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.051 15:17:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:03.051 15:17:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:12:03.051 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.051 15:17:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.310 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.310 15:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:03.310 15:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.310 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.310 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.878 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.878 15:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:03.878 15:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.878 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.878 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.137 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.137 15:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:04.137 15:17:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.137 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.137 15:17:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.396 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.396 15:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:04.396 15:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.396 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.396 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.655 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.655 15:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:04.655 15:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.655 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.655 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.913 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.913 15:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:04.913 15:17:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.913 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.913 15:17:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.480 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.480 15:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:05.480 15:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.480 15:17:09 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.480 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.738 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.738 15:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:05.738 15:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.738 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.738 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.997 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.997 15:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:05.997 15:17:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.997 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.997 15:17:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.256 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.256 15:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:06.256 15:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.256 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.256 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.822 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.822 15:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:06.822 15:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.822 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.822 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.080 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.080 15:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:07.080 15:17:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.080 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.080 15:17:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.338 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.338 15:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:07.338 15:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.338 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.338 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.597 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2954949 00:12:07.597 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2954949) - No such process 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2954949 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.597 rmmod nvme_tcp 00:12:07.597 rmmod nvme_fabrics 00:12:07.597 rmmod nvme_keyring 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2954727 ']' 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2954727 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2954727 ']' 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2954727 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:07.597 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2954727 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2954727' 00:12:07.856 killing process with pid 2954727 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2954727 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2954727 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.856 15:17:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.389 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.389 00:12:10.389 real 0m20.693s 00:12:10.389 user 0m40.845s 00:12:10.389 sys 0m10.187s 00:12:10.389 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.389 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.389 ************************************ 00:12:10.389 END TEST nvmf_connect_stress 00:12:10.389 ************************************ 00:12:10.389 15:17:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:10.389 15:17:13 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:10.389 15:17:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:10.389 15:17:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.389 15:17:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:10.389 ************************************ 00:12:10.389 START TEST nvmf_fused_ordering 00:12:10.389 ************************************ 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:10.389 * Looking for test storage... 00:12:10.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.389 15:17:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.389 
15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.389 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.390 15:17:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:16.976 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:16.976 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:16.976 Found net devices under 0000:af:00.0: cvl_0_0 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:16.976 Found net devices under 0000:af:00.1: cvl_0_1 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:16.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:12:16.976 00:12:16.976 --- 10.0.0.2 ping statistics --- 00:12:16.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.976 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:16.976 00:12:16.976 --- 10.0.0.1 ping statistics --- 00:12:16.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.976 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2960295 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2960295 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2960295 ']' 00:12:16.976 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.977 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:16.977 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.977 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:16.977 15:17:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.977 [2024-07-15 15:17:20.759559] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:16.977 [2024-07-15 15:17:20.759607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.977 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.977 [2024-07-15 15:17:20.834379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.236 [2024-07-15 15:17:20.907394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.236 [2024-07-15 15:17:20.907427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.236 [2024-07-15 15:17:20.907437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.236 [2024-07-15 15:17:20.907445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.236 [2024-07-15 15:17:20.907468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
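The launch above and the RPC calls traced below amount to a complete NVMe-oF/TCP target bring-up: create a TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach a listener on 10.0.0.2:4420, back it with a 1000 MiB null bdev, and point the fused_ordering exerciser at it. As a minimal hand-run sketch (assuming an SPDK build tree, its scripts/rpc.py helper, and the default /var/tmp/spdk.sock RPC socket used here; this mirrors the trace rather than quoting fused_ordering.sh):

    # Launch the target inside the test namespace, then poll until its RPC socket answers.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # The same RPC sequence the harness issues below.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB namespace backing, 512 B blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Drive fused-command ordering against the listener; the numbered
    # fused_ordering(N) lines below are this app's per-entry progress output.
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'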
00:12:17.236 [2024-07-15 15:17:20.907490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.803 [2024-07-15 15:17:21.605940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.803 [2024-07-15 15:17:21.626094] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.803 NULL1 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.803 15:17:21 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.803 15:17:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:17.803 [2024-07-15 15:17:21.682048] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:17.804 [2024-07-15 15:17:21.682091] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960568 ] 00:12:18.062 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.409 Attached to nqn.2016-06.io.spdk:cnode1 00:12:18.409 Namespace ID: 1 size: 1GB 00:12:18.409 fused_ordering(0) 00:12:18.409 fused_ordering(1) 00:12:18.409 fused_ordering(2) 00:12:18.409 fused_ordering(3) 00:12:18.409 fused_ordering(4) 00:12:18.409 fused_ordering(5) 00:12:18.409 fused_ordering(6) 00:12:18.409 fused_ordering(7) 00:12:18.409 fused_ordering(8) 00:12:18.409 fused_ordering(9) 00:12:18.409 fused_ordering(10) 00:12:18.409 fused_ordering(11) 00:12:18.409 fused_ordering(12) 00:12:18.409 fused_ordering(13) 00:12:18.409 fused_ordering(14) 00:12:18.409 fused_ordering(15) 00:12:18.409 fused_ordering(16) 00:12:18.409 fused_ordering(17) 00:12:18.409 fused_ordering(18) 00:12:18.409 fused_ordering(19) 00:12:18.409 fused_ordering(20) 00:12:18.409 fused_ordering(21) 00:12:18.409 fused_ordering(22) 00:12:18.409 fused_ordering(23) 00:12:18.409 fused_ordering(24) 00:12:18.409 fused_ordering(25) 00:12:18.409 fused_ordering(26) 00:12:18.409 fused_ordering(27) 00:12:18.409 fused_ordering(28) 00:12:18.409 fused_ordering(29) 00:12:18.409 fused_ordering(30) 00:12:18.409 fused_ordering(31) 00:12:18.409 fused_ordering(32) 00:12:18.409 fused_ordering(33) 00:12:18.409 fused_ordering(34) 00:12:18.409 fused_ordering(35) 00:12:18.409 fused_ordering(36) 00:12:18.409 fused_ordering(37) 00:12:18.409 fused_ordering(38) 00:12:18.409 fused_ordering(39) 00:12:18.409 fused_ordering(40) 00:12:18.409 fused_ordering(41) 00:12:18.409 fused_ordering(42) 00:12:18.409 fused_ordering(43) 00:12:18.409 fused_ordering(44) 00:12:18.409 fused_ordering(45) 00:12:18.409 fused_ordering(46) 00:12:18.409 fused_ordering(47) 00:12:18.409 fused_ordering(48) 00:12:18.409 fused_ordering(49) 00:12:18.409 fused_ordering(50) 00:12:18.409 fused_ordering(51) 00:12:18.409 fused_ordering(52) 00:12:18.409 fused_ordering(53) 00:12:18.409 fused_ordering(54) 00:12:18.409 fused_ordering(55) 00:12:18.409 fused_ordering(56) 00:12:18.409 fused_ordering(57) 00:12:18.409 fused_ordering(58) 00:12:18.409 fused_ordering(59) 00:12:18.409 fused_ordering(60) 00:12:18.409 fused_ordering(61) 00:12:18.409 fused_ordering(62) 00:12:18.409 fused_ordering(63) 00:12:18.409 fused_ordering(64) 00:12:18.409 fused_ordering(65) 00:12:18.409 fused_ordering(66) 00:12:18.409 fused_ordering(67) 00:12:18.409 fused_ordering(68) 00:12:18.409 fused_ordering(69) 00:12:18.409 fused_ordering(70) 00:12:18.409 fused_ordering(71) 00:12:18.409 fused_ordering(72) 00:12:18.409 fused_ordering(73) 00:12:18.409 fused_ordering(74) 00:12:18.409 fused_ordering(75) 00:12:18.409 fused_ordering(76) 00:12:18.409 fused_ordering(77) 00:12:18.409 fused_ordering(78) 00:12:18.409 
fused_ordering(79) … fused_ordering(939) [sequence continues uninterrupted, one line per entry; elapsed timestamps advance from 00:12:18.409 to 00:12:20.380]
00:12:20.380 fused_ordering(940) 00:12:20.380 fused_ordering(941) 00:12:20.380 fused_ordering(942) 00:12:20.380 fused_ordering(943) 00:12:20.380 fused_ordering(944) 00:12:20.380 fused_ordering(945) 00:12:20.380 fused_ordering(946) 00:12:20.380 fused_ordering(947) 00:12:20.380 fused_ordering(948) 00:12:20.380 fused_ordering(949) 00:12:20.380 fused_ordering(950) 00:12:20.380 fused_ordering(951) 00:12:20.380 fused_ordering(952) 00:12:20.380 fused_ordering(953) 00:12:20.380 fused_ordering(954) 00:12:20.380 fused_ordering(955) 00:12:20.380 fused_ordering(956) 00:12:20.380 fused_ordering(957) 00:12:20.380 fused_ordering(958) 00:12:20.380 fused_ordering(959) 00:12:20.380 fused_ordering(960) 00:12:20.380 fused_ordering(961) 00:12:20.380 fused_ordering(962) 00:12:20.380 fused_ordering(963) 00:12:20.380 fused_ordering(964) 00:12:20.380 fused_ordering(965) 00:12:20.380 fused_ordering(966) 00:12:20.380 fused_ordering(967) 00:12:20.380 fused_ordering(968) 00:12:20.380 fused_ordering(969) 00:12:20.380 fused_ordering(970) 00:12:20.380 fused_ordering(971) 00:12:20.380 fused_ordering(972) 00:12:20.380 fused_ordering(973) 00:12:20.380 fused_ordering(974) 00:12:20.380 fused_ordering(975) 00:12:20.380 fused_ordering(976) 00:12:20.380 fused_ordering(977) 00:12:20.380 fused_ordering(978) 00:12:20.380 fused_ordering(979) 00:12:20.380 fused_ordering(980) 00:12:20.380 fused_ordering(981) 00:12:20.380 fused_ordering(982) 00:12:20.380 fused_ordering(983) 00:12:20.380 fused_ordering(984) 00:12:20.380 fused_ordering(985) 00:12:20.380 fused_ordering(986) 00:12:20.380 fused_ordering(987) 00:12:20.380 fused_ordering(988) 00:12:20.380 fused_ordering(989) 00:12:20.380 fused_ordering(990) 00:12:20.380 fused_ordering(991) 00:12:20.380 fused_ordering(992) 00:12:20.380 fused_ordering(993) 00:12:20.380 fused_ordering(994) 00:12:20.380 fused_ordering(995) 00:12:20.380 fused_ordering(996) 00:12:20.380 fused_ordering(997) 00:12:20.380 fused_ordering(998) 00:12:20.380 fused_ordering(999) 00:12:20.380 fused_ordering(1000) 00:12:20.380 fused_ordering(1001) 00:12:20.380 fused_ordering(1002) 00:12:20.380 fused_ordering(1003) 00:12:20.380 fused_ordering(1004) 00:12:20.380 fused_ordering(1005) 00:12:20.380 fused_ordering(1006) 00:12:20.380 fused_ordering(1007) 00:12:20.380 fused_ordering(1008) 00:12:20.380 fused_ordering(1009) 00:12:20.380 fused_ordering(1010) 00:12:20.380 fused_ordering(1011) 00:12:20.380 fused_ordering(1012) 00:12:20.380 fused_ordering(1013) 00:12:20.380 fused_ordering(1014) 00:12:20.380 fused_ordering(1015) 00:12:20.380 fused_ordering(1016) 00:12:20.380 fused_ordering(1017) 00:12:20.380 fused_ordering(1018) 00:12:20.380 fused_ordering(1019) 00:12:20.380 fused_ordering(1020) 00:12:20.380 fused_ordering(1021) 00:12:20.380 fused_ordering(1022) 00:12:20.380 fused_ordering(1023) 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:12:20.380 rmmod nvme_tcp 00:12:20.380 rmmod nvme_fabrics 00:12:20.380 rmmod nvme_keyring 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:20.380 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2960295 ']' 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2960295 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2960295 ']' 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2960295 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2960295 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2960295' 00:12:20.381 killing process with pid 2960295 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2960295 00:12:20.381 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2960295 00:12:20.639 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.639 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:20.639 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:20.639 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:20.639 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:20.639 15:17:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.639 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.640 15:17:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.170 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.170 00:12:23.170 real 0m12.578s 00:12:23.170 user 0m6.417s 00:12:23.170 sys 0m7.080s 00:12:23.170 15:17:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:23.170 15:17:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.170 ************************************ 00:12:23.170 END TEST nvmf_fused_ordering 00:12:23.170 ************************************ 00:12:23.170 15:17:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:23.170 15:17:26 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:23.170 15:17:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:23.170 15:17:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
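Condensed for reference, the teardown traced above follows one pattern: unload the NVMe transport modules with bounded retries, then kill and reap the target process. A minimal sketch, reconstructed from the traced commands (helper names come from the trace; the bodies are an assumption, not the verbatim nvmf/common.sh source):

# Hedged reconstruction of the nvmftestfini teardown seen in the trace above.
nvmfcleanup() {
    sync
    set +e                          # module removal may fail while references remain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
}

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0           # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 in this run
    [ "$process_name" = sudo ] && return 0           # assumption: never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                       # reap so the exit status is collected
}

nvmftestfini() {
    nvmfcleanup
    [ -n "$nvmfpid" ] && killprocess "$nvmfpid"      # $nvmfpid was 2960295 in this run
}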
00:12:23.170 15:17:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.170 ************************************ 00:12:23.170 START TEST nvmf_delete_subsystem 00:12:23.170 ************************************ 00:12:23.170 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:23.170 * Looking for test storage... 00:12:23.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.170 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.170 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:23.170 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.170 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same toolchain triple repeated; elided ...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [... final PATH value elided; identical to the export.sh@4 assignment ...] 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.171 15:17:26
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.171 15:17:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:29.736 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:29.736 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.736 
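The discovery trace around this point (continued below) resolves the supported NVMe-oF NICs to kernel interfaces through sysfs. A condensed sketch of that walk, assuming the PCI addresses reported in this run; the operstate check stands in for the trace's [[ up == up ]] test:

# Hedged sketch of gather_supported_nvmf_pci_devs as traced here.
intel=0x8086
pci_devs=(0000:af:00.0 0000:af:00.1)   # the two e810 (0x8086 - 0x159b) functions found in this run
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Each PCI function exposes its kernel net interfaces under sysfs.
    for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
        # Keep only interfaces whose link is up.
        if [[ $(cat "$net_dev/operstate" 2>/dev/null) == up ]]; then
            net_devs+=("${net_dev##*/}")             # e.g. cvl_0_0, cvl_0_1
        fi
    done
done
echo "Found net devices: ${net_devs[*]}"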
15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:29.736 Found net devices under 0000:af:00.0: cvl_0_0 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:29.736 Found net devices under 0000:af:00.1: cvl_0_1 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.736 15:17:33 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:29.736 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:29.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:12:29.995 00:12:29.995 --- 10.0.0.2 ping statistics --- 00:12:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.995 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:12:29.995 00:12:29.995 --- 10.0.0.1 ping statistics --- 00:12:29.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.995 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2964651 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2964651 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2964651 ']' 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.995 15:17:33 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.995 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.996 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.996 15:17:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.996 [2024-07-15 15:17:33.855316] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:29.996 [2024-07-15 15:17:33.855366] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.996 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.253 [2024-07-15 15:17:33.929009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:30.253 [2024-07-15 15:17:33.999439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.253 [2024-07-15 15:17:33.999476] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.253 [2024-07-15 15:17:33.999486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.253 [2024-07-15 15:17:33.999495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.253 [2024-07-15 15:17:33.999502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
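Before the target app starts, the nvmf_tcp_init sequence traced above wires one NIC port into a private network namespace and verifies connectivity in both directions. A minimal equivalent sketch, with interface names and addresses taken from this log:

# Hedged sketch of the nvmf_tcp_init plumbing traced above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"      # target side lives in the netns
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1         # initiator side stays in the root ns
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"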
00:12:30.253 [2024-07-15 15:17:33.999549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.253 [2024-07-15 15:17:33.999551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.819 [2024-07-15 15:17:34.691129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.819 [2024-07-15 15:17:34.707291] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:30.819 NULL1 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.819 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:31.077 Delay0 00:12:31.078 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.078 15:17:34 
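Taken together, the rpc_cmd calls traced here, plus the nvmf_subsystem_add_ns call just below, configure the target end to end: transport, subsystem, listener, and a delay bdev layered on a null bdev. A condensed sketch of the same sequence (the commands and arguments are verbatim from this trace; the rpc_cmd wrapper is an assumption, taken to drive scripts/rpc.py against the default /var/tmp/spdk.sock socket):

# Hedged sketch of the delete_subsystem.sh setup steps traced here.
rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512        # null backing bdev; sizes as traced
# Delay bdev in front of NULL1; -r/-t/-w/-n are assumed to be the average/p99
# read and write latencies in microseconds, values as traced.
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0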
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.078 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.078 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:31.078 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.078 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2964800 00:12:31.078 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:31.078 15:17:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:31.078 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.078 [2024-07-15 15:17:34.791953] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:32.982 15:17:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.982 15:17:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.982 15:17:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:32.982 [... long runs of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" from the in-flight perf I/O elided ...] 00:12:32.982 [2024-07-15 15:17:36.872072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3cc000cfe0 is same with the state(5) to be set 00:12:32.982 [... repeated completion-with-error lines elided ...] 00:12:34.360 [2024-07-15 15:17:37.846615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1936450 is same with the state(5) to be set 00:12:34.360 [... repeated completion-with-error lines elided ...] 00:12:34.360 [2024-07-15 15:17:37.873981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19565c0 is same with the state(5) to be set 00:12:34.360 [... repeated completion-with-error lines elided ...] 00:12:34.360 [2024-07-15 15:17:37.874307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959560 is same with the state(5) to be set 00:12:34.361 [... repeated completion-with-error lines elided ...] 00:12:34.361 [2024-07-15 15:17:37.874470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935910 is same with the state(5) to be set 00:12:34.361 [... repeated completion-with-error lines elided ...] 00:12:34.361 [2024-07-15 15:17:37.874573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3cc000d2f0 is same with the state(5) to be set 00:12:34.361 Initializing NVMe Controllers 00:12:34.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:34.361 Controller IO queue size 128, less than required. 00:12:34.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:34.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:34.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:34.361 Initialization complete. Launching workers.
00:12:34.361 ======================================================== 00:12:34.361 Latency(us) 00:12:34.361 Device Information : IOPS MiB/s Average min max 00:12:34.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 191.26 0.09 952385.87 489.78 1010529.63 00:12:34.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.48 0.08 869788.67 270.43 1012130.08 00:12:34.361 ======================================================== 00:12:34.361 Total : 348.73 0.17 915087.70 270.43 1012130.08 00:12:34.361 00:12:34.361 [2024-07-15 15:17:37.875275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1936450 (9): Bad file descriptor 00:12:34.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:34.361 15:17:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.361 15:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:34.361 15:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2964800 00:12:34.361 15:17:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:34.620 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2964800 00:12:34.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2964800) - No such process 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2964800 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2964800 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2964800 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.621 [2024-07-15 15:17:38.402860] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2965447 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:34.621 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:34.621 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.621 [2024-07-15 15:17:38.473445] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
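The xtrace above shows the core of this test: delete_subsystem.sh starts spdk_nvme_perf in the background (perf_pid=2965447 at line 54) and then repeatedly probes it with kill -0 while the subsystem is torn down underneath it. A minimal sketch of that poll loop, reconstructed from the traced commands (the PID and the 20-iteration limit are taken from the trace, not from the script itself):

# Reconstruction of the traced poll loop (delete_subsystem.sh@54-60);
# assumes spdk_nvme_perf was just started in the background, as in the log.
perf_pid=2965447                            # PID reported by the trace
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
    (( delay++ > 20 )) && exit 1            # bail out after ~10s of polling
    sleep 0.5
done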
00:12:35.188 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:35.188 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:35.188 15:17:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:35.755 15:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:35.755 15:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:35.755 15:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:36.323 15:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:36.323 15:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:36.323 15:17:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:36.581 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:36.581 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:36.581 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:37.148 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:37.148 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:37.148 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:37.715 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:37.715 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:37.715 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:37.973 Initializing NVMe Controllers 00:12:37.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:37.973 Controller IO queue size 128, less than required. 00:12:37.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:37.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:37.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:37.973 Initialization complete. Launching workers. 
00:12:37.973 ========================================================
00:12:37.973                                                                           Latency(us)
00:12:37.973 Device Information                                                       :       IOPS      MiB/s    Average         min         max
00:12:37.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06  1003108.14  1000210.22  1009719.86
00:12:37.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06  1005504.34  1000336.23  1042992.56
00:12:37.973 ========================================================
00:12:37.973 Total                                                                    :     256.00       0.12  1004306.24  1000210.22  1042992.56
00:12:37.973
00:12:38.231 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:38.231 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2965447 00:12:38.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2965447) - No such process 00:12:38.231 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2965447 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.232 15:17:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:38.232 rmmod nvme_tcp 00:12:38.232 rmmod nvme_fabrics 00:12:38.232 rmmod nvme_keyring 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2964651 ']' 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2964651 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2964651 ']' 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2964651 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2964651 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2964651' 00:12:38.232 killing process with pid 2964651 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 2964651 00:12:38.232 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
2964651 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.491 15:17:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.042 15:17:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:41.042 00:12:41.042 real 0m17.773s 00:12:41.042 user 0m29.742s 00:12:41.042 sys 0m7.310s 00:12:41.042 15:17:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.042 15:17:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 ************************************ 00:12:41.042 END TEST nvmf_delete_subsystem 00:12:41.042 ************************************ 00:12:41.042 15:17:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:41.042 15:17:44 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:41.042 15:17:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:41.042 15:17:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.042 15:17:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 ************************************ 00:12:41.042 START TEST nvmf_ns_masking 00:12:41.042 ************************************ 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:41.042 * Looking for test storage... 
00:12:41.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.042 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e9b76322-0940-4c64-93fb-fdb7e927e0f0 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2ab1c88f-9cd0-41d4-98e8-2dd6d73363c2 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c3037766-672b-416c-8f0c-15ab231e9095 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:41.043 15:17:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.676 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:47.677 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:47.677 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.677 
15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:47.677 Found net devices under 0000:af:00.0: cvl_0_0 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:47.677 Found net devices under 0000:af:00.1: cvl_0_1 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.677 15:17:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:12:47.677 00:12:47.677 --- 10.0.0.2 ping statistics --- 00:12:47.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.677 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:12:47.677 00:12:47.677 --- 10.0.0.1 ping statistics --- 00:12:47.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.677 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2969801 00:12:47.677 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2969801 00:12:47.678 15:17:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:47.678 15:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2969801 ']' 00:12:47.678 15:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.678 15:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.678 15:17:51 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.678 15:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.678 15:17:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.678 [2024-07-15 15:17:51.389528] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:47.678 [2024-07-15 15:17:51.389575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.678 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.678 [2024-07-15 15:17:51.461774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.678 [2024-07-15 15:17:51.534010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.678 [2024-07-15 15:17:51.534048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.678 [2024-07-15 15:17:51.534057] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.678 [2024-07-15 15:17:51.534065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.678 [2024-07-15 15:17:51.534088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.678 [2024-07-15 15:17:51.534111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:48.613 [2024-07-15 15:17:52.385649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:48.613 15:17:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:48.872 Malloc1 00:12:48.872 15:17:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:49.129 Malloc2 00:12:49.129 15:17:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
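Condensed, the target-side setup that the rpc.py traces above and below perform is the following sequence (a sketch; rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and all flags are copied from the xtrace output):

# Sketch of the traced setup RPCs (ns_masking.sh@53-64), abbreviated.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB bdev, 512-byte blocks
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420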
00:12:49.129 15:17:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:49.388 15:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.388 [2024-07-15 15:17:53.278856] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.646 15:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:49.646 15:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3037766-672b-416c-8f0c-15ab231e9095 -a 10.0.0.2 -s 4420 -i 4 00:12:49.646 15:17:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.646 15:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.646 15:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.646 15:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.646 15:17:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:52.201 [ 0]:0x1 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e99e16ddccd94e9797019778ac715360 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e99e16ddccd94e9797019778ac715360 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
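The visibility probes traced above reduce to two nvme-cli calls. Reconstructed from the traced ns_masking.sh@43-45, the helper looks roughly like this (a sketch from the xtrace, not the script itself):

# A namespace counts as "visible" when it appears in list-ns and id-ns
# reports a non-zero NGUID; reconstructed from ns_masking.sh@43-45 as traced.
ns_is_visible() {
    nvme list-ns /dev/nvme0 | grep "$1" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}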
00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:52.201 [ 0]:0x1 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e99e16ddccd94e9797019778ac715360 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e99e16ddccd94e9797019778ac715360 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:52.201 [ 1]:0x2 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f95b65b7a9f740539826d1615a15105f 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f95b65b7a9f740539826d1615a15105f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:52.201 15:17:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.201 15:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.460 15:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3037766-672b-416c-8f0c-15ab231e9095 -a 10.0.0.2 -s 4420 -i 4 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:52.718 15:17:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.249 15:17:58 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.249 [ 0]:0x2 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f95b65b7a9f740539826d1615a15105f 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f95b65b7a9f740539826d1615a15105f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.249 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.249 [ 0]:0x1 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e99e16ddccd94e9797019778ac715360 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e99e16ddccd94e9797019778ac715360 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:55.249 [ 1]:0x2 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:55.249 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f95b65b7a9f740539826d1615a15105f 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f95b65b7a9f740539826d1615a15105f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:55.508 [ 0]:0x2 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.508 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:55.767 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f95b65b7a9f740539826d1615a15105f 00:12:55.767 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f95b65b7a9f740539826d1615a15105f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.767 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:55.767 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.767 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:55.767 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:55.767 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3037766-672b-416c-8f0c-15ab231e9095 -a 10.0.0.2 -s 4420 -i 4 00:12:56.026 15:17:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:56.026 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.026 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.026 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:56.026 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:56.026 15:17:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.591 15:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
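connect/waitforserial, as traced, is an nvme-cli fabrics connect followed by a poll on the device serial; a sketch using the values from the log (host UUID, queue count -i 4, serial, and the 2-device expectation are all taken from the trace):

# Reconstruction of the traced connect (ns_masking.sh@22) plus the
# waitforserial poll from autotest_common.sh; values copied from the log.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
     -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
     -I c3037766-672b-416c-8f0c-15ab231e9095
i=0
until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 2 )); do
    (( i++ > 15 )) && exit 1   # give up after ~30s (15 polls x 2s)
    sleep 2
done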
00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.592 [ 0]:0x1 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e99e16ddccd94e9797019778ac715360 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e99e16ddccd94e9797019778ac715360 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.592 15:18:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.592 [ 1]:0x2 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f95b65b7a9f740539826d1615a15105f 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f95b65b7a9f740539826d1615a15105f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.592 [ 0]:0x2 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f95b65b7a9f740539826d1615a15105f 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f95b65b7a9f740539826d1615a15105f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:58.592 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:58.592 [2024-07-15 15:18:02.493106] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:12:58.592 request:
00:12:58.592 {
00:12:58.592 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:58.592 "nsid": 2,
00:12:58.592 "host": "nqn.2016-06.io.spdk:host1",
00:12:58.592 "method": "nvmf_ns_remove_host",
00:12:58.592 "req_id": 1
00:12:58.592 }
00:12:58.592 Got JSON-RPC error response
00:12:58.592 response:
00:12:58.592 {
00:12:58.592 "code": -32602,
00:12:58.592 "message": "Invalid parameters"
00:12:58.592 }
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:58.852 [ 0]:0x2
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f95b65b7a9f740539826d1615a15105f
00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[
f95b65b7a9f740539826d1615a15105f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2971946 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2971946 /var/tmp/host.sock 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2971946 ']' 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:58.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.852 15:18:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:58.852 [2024-07-15 15:18:02.722963] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
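The host-side launch above follows the standard autotest lifecycle: start spdk_tgt on a private RPC socket, arm a cleanup trap, then poll until the socket answers before issuing any RPCs. A minimal sketch of that pattern, with the killprocess/waitforlisten helpers condensed into plain shell (spdk_tgt and rpc.py abbreviate the full build/bin and scripts/ paths shown in the log; the rpc_get_methods poll stands in for waitforlisten's readiness check and is an assumption of this sketch, not the helper's literal body):

    # start the host-side target on its own RPC socket, core mask 0x2 (as in the log)
    spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # make sure the process is reaped on any exit path
    trap 'kill -9 $hostpid' SIGINT SIGTERM EXIT
    # poll the UNIX domain socket until the target answers (max_retries=100 in the log)
    for ((i = 0; i < 100; i++)); do
        rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done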
00:12:58.852 [2024-07-15 15:18:02.723011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971946 ] 00:12:58.852 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.111 [2024-07-15 15:18:02.793037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.111 [2024-07-15 15:18:02.862199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.677 15:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.677 15:18:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:59.677 15:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.936 15:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:00.195 15:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e9b76322-0940-4c64-93fb-fdb7e927e0f0 00:13:00.195 15:18:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:00.195 15:18:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E9B7632209404C6493FBFDB7E927E0F0 -i 00:13:00.195 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2ab1c88f-9cd0-41d4-98e8-2dd6d73363c2 00:13:00.195 15:18:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:00.195 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2AB1C88F9CD041D498E82DD6D73363C2 -i 00:13:00.452 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:00.711 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:00.711 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:00.711 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:01.276 nvme0n1 00:13:01.276 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:01.276 15:18:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:13:01.276 nvme1n2 00:13:01.276 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:01.276 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:01.276 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:01.276 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:01.276 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:01.534 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:01.534 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:01.534 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:01.534 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:01.791 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e9b76322-0940-4c64-93fb-fdb7e927e0f0 == \e\9\b\7\6\3\2\2\-\0\9\4\0\-\4\c\6\4\-\9\3\f\b\-\f\d\b\7\e\9\2\7\e\0\f\0 ]] 00:13:01.791 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:01.791 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:01.791 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:01.791 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2ab1c88f-9cd0-41d4-98e8-2dd6d73363c2 == \2\a\b\1\c\8\8\f\-\9\c\d\0\-\4\1\d\4\-\9\8\e\8\-\2\d\d\6\d\7\3\3\6\3\c\2 ]] 00:13:01.791 15:18:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2971946 00:13:01.791 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2971946 ']' 00:13:01.792 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2971946 00:13:01.792 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:01.792 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.792 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2971946 00:13:02.049 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:02.050 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:02.050 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2971946' 00:13:02.050 killing process with pid 2971946 00:13:02.050 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2971946 00:13:02.050 15:18:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2971946 00:13:02.307 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:02.565 15:18:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.565 rmmod nvme_tcp 00:13:02.565 rmmod nvme_fabrics 00:13:02.565 rmmod nvme_keyring 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2969801 ']' 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2969801 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2969801 ']' 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2969801 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2969801 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2969801' 00:13:02.565 killing process with pid 2969801 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2969801 00:13:02.565 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2969801 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.823 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.355 15:18:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:05.355 00:13:05.355 real 0m24.234s 00:13:05.355 user 0m24.144s 00:13:05.355 sys 0m8.103s 00:13:05.355 15:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.355 15:18:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:05.355 ************************************ 00:13:05.355 END TEST nvmf_ns_masking 00:13:05.355 ************************************ 00:13:05.355 15:18:08 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:13:05.355 15:18:08 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:05.355 15:18:08 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:05.355 15:18:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:05.355 15:18:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.355 15:18:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:05.355 ************************************ 00:13:05.355 START TEST nvmf_nvme_cli 00:13:05.355 ************************************ 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:05.355 * Looking for test storage... 00:13:05.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.355 15:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.356 15:18:08 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:11.993 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:11.993 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:11.993 Found net devices under 0000:af:00.0: cvl_0_0 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.993 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:11.994 Found net devices under 0000:af:00.1: cvl_0_1 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:11.994 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.252 15:18:15 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.252 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.252 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:13:12.252 00:13:12.252 --- 10.0.0.2 ping statistics --- 00:13:12.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.252 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.252 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.252 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:13:12.252 00:13:12.252 --- 10.0.0.1 ping statistics --- 00:13:12.252 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.252 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.252 15:18:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2976855 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2976855 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2976855 ']' 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.252 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:12.252 [2024-07-15 15:18:16.072266] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
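Condensed, the nvmf_tcp_init wiring that produced the ping output above amounts to: move the target-side port into a private network namespace, address both ends, open the NVMe/TCP listener port, and verify reachability in both directions. Interface names and addresses below are exactly as in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator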
00:13:12.252 [2024-07-15 15:18:16.072316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.252 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.252 [2024-07-15 15:18:16.146642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.511 [2024-07-15 15:18:16.220124] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.511 [2024-07-15 15:18:16.220164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.511 [2024-07-15 15:18:16.220173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.511 [2024-07-15 15:18:16.220182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.511 [2024-07-15 15:18:16.220189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.511 [2024-07-15 15:18:16.220240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.511 [2024-07-15 15:18:16.220328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.511 [2024-07-15 15:18:16.220419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.511 [2024-07-15 15:18:16.220421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.078 [2024-07-15 15:18:16.924595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.078 Malloc0 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.078 Malloc1 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.078 15:18:16 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.078 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.337 15:18:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.337 [2024-07-15 15:18:17.009038] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:13:13.337 00:13:13.337 Discovery Log Number of Records 2, Generation counter 2 00:13:13.337 =====Discovery Log Entry 0====== 00:13:13.337 trtype: tcp 00:13:13.337 adrfam: ipv4 00:13:13.337 subtype: current discovery subsystem 00:13:13.337 treq: not required 00:13:13.337 portid: 0 00:13:13.337 trsvcid: 4420 00:13:13.337 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:13.337 traddr: 10.0.0.2 00:13:13.337 eflags: explicit discovery connections, duplicate discovery information 00:13:13.337 sectype: none 00:13:13.337 =====Discovery Log Entry 1====== 00:13:13.337 trtype: tcp 00:13:13.337 adrfam: ipv4 00:13:13.337 subtype: nvme subsystem 00:13:13.337 treq: not required 00:13:13.337 portid: 0 00:13:13.337 trsvcid: 4420 00:13:13.337 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:13.337 traddr: 10.0.0.2 00:13:13.337 eflags: none 00:13:13.337 sectype: none 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.337 15:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:13.338 15:18:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.338 15:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:13.338 15:18:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.713 15:18:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:14.713 15:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.713 15:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.713 15:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:14.713 15:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:14.713 15:18:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:17.243 15:18:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:17.243 /dev/nvme0n1 ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:17.243 rmmod nvme_tcp 00:13:17.243 rmmod nvme_fabrics 00:13:17.243 rmmod nvme_keyring 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2976855 ']' 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2976855 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2976855 ']' 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2976855 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2976855 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2976855' 00:13:17.243 killing process with pid 2976855 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2976855 00:13:17.243 15:18:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2976855 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.243 15:18:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.828 15:18:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:19.828 00:13:19.828 real 0m14.485s 00:13:19.828 user 0m21.365s 00:13:19.828 sys 0m6.260s 00:13:19.828 15:18:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.828 15:18:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:19.828 ************************************ 00:13:19.828 END TEST nvmf_nvme_cli 00:13:19.828 ************************************ 00:13:19.828 15:18:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:19.828 15:18:23 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:19.828 15:18:23 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:19.828 15:18:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.828 15:18:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.828 15:18:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:19.828 ************************************ 00:13:19.828 START TEST nvmf_vfio_user 00:13:19.828 ************************************ 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:19.828 * Looking for test storage... 00:13:19.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.828 15:18:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:19.829 
15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2978150 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2978150' 00:13:19.829 Process pid: 2978150 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2978150 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2978150 ']' 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.829 15:18:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:19.829 [2024-07-15 15:18:23.482072] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:13:19.829 [2024-07-15 15:18:23.482122] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.829 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.829 [2024-07-15 15:18:23.549653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.829 [2024-07-15 15:18:23.623206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.829 [2024-07-15 15:18:23.623251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.829 [2024-07-15 15:18:23.623260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.829 [2024-07-15 15:18:23.623268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.829 [2024-07-15 15:18:23.623294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
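(For orientation: once the reactors below come up, the target setup in nvmf_vfio_user.sh reduces to the RPC sequence sketched here. This is a condensed sketch only, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path; the exact invocations are traced in the log that follows.

  rpc.py nvmf_create_transport -t VFIOUSER                      # register the vfio-user transport with the target
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1               # socket directory for controller 1
  rpc.py bdev_malloc_create 64 512 -b Malloc1                   # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0            # listen on the vfio-user socket path

The same six steps repeat for Malloc2 / nqn.2019-07.io.spdk:cnode2 under /var/run/vfio-user/domain/vfio-user2/2.)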
00:13:19.829 [2024-07-15 15:18:23.623354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.829 [2024-07-15 15:18:23.623449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.829 [2024-07-15 15:18:23.623538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.829 [2024-07-15 15:18:23.623539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.395 15:18:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.395 15:18:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:20.395 15:18:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:21.771 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:21.771 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:21.771 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:21.771 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:21.771 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:21.771 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:21.771 Malloc1 00:13:21.771 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:22.030 15:18:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:22.288 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:22.547 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:22.547 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:22.547 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:22.547 Malloc2 00:13:22.547 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:22.806 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:23.064 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:23.324 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:23.324 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:23.324 15:18:26 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:23.324 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:23.324 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:23.324 15:18:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:23.324 [2024-07-15 15:18:27.011654] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:13:23.324 [2024-07-15 15:18:27.011692] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2978861 ] 00:13:23.324 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.324 [2024-07-15 15:18:27.043202] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:23.324 [2024-07-15 15:18:27.053174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.324 [2024-07-15 15:18:27.053195] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f72c203f000 00:13:23.324 [2024-07-15 15:18:27.054174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.324 [2024-07-15 15:18:27.055176] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.324 [2024-07-15 15:18:27.056187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.324 [2024-07-15 15:18:27.057195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.324 [2024-07-15 15:18:27.058195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.324 [2024-07-15 15:18:27.059195] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.324 [2024-07-15 15:18:27.060205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.325 [2024-07-15 15:18:27.061209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.325 [2024-07-15 15:18:27.062211] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.325 [2024-07-15 15:18:27.062221] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f72c2034000 00:13:23.325 [2024-07-15 15:18:27.063114] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:23.325 [2024-07-15 15:18:27.071413] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:23.325 [2024-07-15 15:18:27.071441] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:23.325 [2024-07-15 15:18:27.078838] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:23.325 [2024-07-15 15:18:27.078875] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:23.325 [2024-07-15 15:18:27.078950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:23.325 [2024-07-15 15:18:27.078970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:23.325 [2024-07-15 15:18:27.078977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:23.325 [2024-07-15 15:18:27.079306] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:23.325 [2024-07-15 15:18:27.079316] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:23.325 [2024-07-15 15:18:27.079325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:23.325 [2024-07-15 15:18:27.080312] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:23.325 [2024-07-15 15:18:27.080323] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:23.325 [2024-07-15 15:18:27.080332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:23.325 [2024-07-15 15:18:27.081318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:23.325 [2024-07-15 15:18:27.081328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:23.325 [2024-07-15 15:18:27.082323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:23.325 [2024-07-15 15:18:27.082333] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:23.325 [2024-07-15 15:18:27.082339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:23.325 [2024-07-15 15:18:27.082348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:23.325 [2024-07-15 15:18:27.082455] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:23.325 [2024-07-15 15:18:27.082461] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:23.325 [2024-07-15 15:18:27.082467] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:23.325 [2024-07-15 15:18:27.083327] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:23.325 [2024-07-15 15:18:27.084331] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:23.325 [2024-07-15 15:18:27.085336] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:23.325 [2024-07-15 15:18:27.086336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.325 [2024-07-15 15:18:27.086412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:23.325 [2024-07-15 15:18:27.087349] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:23.325 [2024-07-15 15:18:27.087358] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:23.325 [2024-07-15 15:18:27.087364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:23.325 [2024-07-15 15:18:27.087392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087407] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.325 [2024-07-15 15:18:27.087414] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.325 [2024-07-15 15:18:27.087427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.325 [2024-07-15 15:18:27.087483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:23.325 [2024-07-15 15:18:27.087494] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:23.325 [2024-07-15 15:18:27.087502] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:23.325 [2024-07-15 15:18:27.087508] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:23.325 [2024-07-15 15:18:27.087514] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:23.325 [2024-07-15 15:18:27.087520] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:23.325 [2024-07-15 15:18:27.087526] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:23.325 [2024-07-15 15:18:27.087532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:23.325 [2024-07-15 15:18:27.087563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:23.325 [2024-07-15 15:18:27.087576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.325 [2024-07-15 15:18:27.087586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.325 [2024-07-15 15:18:27.087595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.325 [2024-07-15 15:18:27.087604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.325 [2024-07-15 15:18:27.087610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087630] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:23.325 [2024-07-15 15:18:27.087638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:23.325 [2024-07-15 15:18:27.087645] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:23.325 [2024-07-15 15:18:27.087652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.325 [2024-07-15 15:18:27.087689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:23.325 [2024-07-15 15:18:27.087739] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087756] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:23.325 [2024-07-15 15:18:27.087762] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:23.325 [2024-07-15 15:18:27.087769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:23.325 [2024-07-15 15:18:27.087782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:23.325 [2024-07-15 15:18:27.087792] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:23.325 [2024-07-15 15:18:27.087806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087822] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.325 [2024-07-15 15:18:27.087828] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.325 [2024-07-15 15:18:27.087839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.325 [2024-07-15 15:18:27.087858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:23.325 [2024-07-15 15:18:27.087872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:23.325 [2024-07-15 15:18:27.087888] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.325 [2024-07-15 15:18:27.087894] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.326 [2024-07-15 15:18:27.087901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.087916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.087925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:23.326 [2024-07-15 15:18:27.087933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
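(All of the register reads and state transitions traced above and below come from a single spdk_nvme_identify invocation against the first vfio-user controller; its -L flags enable the nvme, nvme_vfio and vfio_pci debug log components that emit these lines. A minimal way to reproduce just this trace, assuming the target from the setup phase is still listening:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci

The bring-up it drives is the standard NVMe controller handshake: read VS and CAP, check CC.EN and wait for CSTS.RDY = 0, program AQA/ASQ/ACQ, set CC.EN = 1, wait for CSTS.RDY = 1, then walk the identify/set-features state machine shown in the *DEBUG* lines.)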
00:13:23.326 [2024-07-15 15:18:27.087942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:23.326 [2024-07-15 15:18:27.087948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:23.326 [2024-07-15 15:18:27.087955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:23.326 [2024-07-15 15:18:27.087962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:23.326 [2024-07-15 15:18:27.087970] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:23.326 [2024-07-15 15:18:27.087976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:23.326 [2024-07-15 15:18:27.087983] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:23.326 [2024-07-15 15:18:27.088003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.088014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.088027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.088038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.088051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.088065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.088078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.088092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.088108] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:23.326 [2024-07-15 15:18:27.088114] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:23.326 [2024-07-15 15:18:27.088118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:23.326 [2024-07-15 15:18:27.088123] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:23.326 [2024-07-15 15:18:27.088130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:23.326 [2024-07-15 15:18:27.088138] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:23.326 
[2024-07-15 15:18:27.088144] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:23.326 [2024-07-15 15:18:27.088150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.088158] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:23.326 [2024-07-15 15:18:27.088164] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.326 [2024-07-15 15:18:27.088171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.088179] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:23.326 [2024-07-15 15:18:27.088184] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:23.326 [2024-07-15 15:18:27.088191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:23.326 [2024-07-15 15:18:27.088199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.088212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.088225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:23.326 [2024-07-15 15:18:27.088237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:23.326 ===================================================== 00:13:23.326 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:23.326 ===================================================== 00:13:23.326 Controller Capabilities/Features 00:13:23.326 ================================ 00:13:23.326 Vendor ID: 4e58 00:13:23.326 Subsystem Vendor ID: 4e58 00:13:23.326 Serial Number: SPDK1 00:13:23.326 Model Number: SPDK bdev Controller 00:13:23.326 Firmware Version: 24.09 00:13:23.326 Recommended Arb Burst: 6 00:13:23.326 IEEE OUI Identifier: 8d 6b 50 00:13:23.326 Multi-path I/O 00:13:23.326 May have multiple subsystem ports: Yes 00:13:23.326 May have multiple controllers: Yes 00:13:23.326 Associated with SR-IOV VF: No 00:13:23.326 Max Data Transfer Size: 131072 00:13:23.326 Max Number of Namespaces: 32 00:13:23.326 Max Number of I/O Queues: 127 00:13:23.326 NVMe Specification Version (VS): 1.3 00:13:23.326 NVMe Specification Version (Identify): 1.3 00:13:23.326 Maximum Queue Entries: 256 00:13:23.326 Contiguous Queues Required: Yes 00:13:23.326 Arbitration Mechanisms Supported 00:13:23.326 Weighted Round Robin: Not Supported 00:13:23.326 Vendor Specific: Not Supported 00:13:23.326 Reset Timeout: 15000 ms 00:13:23.326 Doorbell Stride: 4 bytes 00:13:23.326 NVM Subsystem Reset: Not Supported 00:13:23.326 Command Sets Supported 00:13:23.326 NVM Command Set: Supported 00:13:23.326 Boot Partition: Not Supported 00:13:23.326 Memory Page Size Minimum: 4096 bytes 00:13:23.326 Memory Page Size Maximum: 4096 bytes 00:13:23.326 Persistent Memory Region: Not Supported 
00:13:23.326 Optional Asynchronous Events Supported 00:13:23.326 Namespace Attribute Notices: Supported 00:13:23.326 Firmware Activation Notices: Not Supported 00:13:23.326 ANA Change Notices: Not Supported 00:13:23.326 PLE Aggregate Log Change Notices: Not Supported 00:13:23.326 LBA Status Info Alert Notices: Not Supported 00:13:23.326 EGE Aggregate Log Change Notices: Not Supported 00:13:23.326 Normal NVM Subsystem Shutdown event: Not Supported 00:13:23.326 Zone Descriptor Change Notices: Not Supported 00:13:23.326 Discovery Log Change Notices: Not Supported 00:13:23.326 Controller Attributes 00:13:23.326 128-bit Host Identifier: Supported 00:13:23.326 Non-Operational Permissive Mode: Not Supported 00:13:23.326 NVM Sets: Not Supported 00:13:23.326 Read Recovery Levels: Not Supported 00:13:23.326 Endurance Groups: Not Supported 00:13:23.326 Predictable Latency Mode: Not Supported 00:13:23.326 Traffic Based Keep ALive: Not Supported 00:13:23.326 Namespace Granularity: Not Supported 00:13:23.326 SQ Associations: Not Supported 00:13:23.326 UUID List: Not Supported 00:13:23.326 Multi-Domain Subsystem: Not Supported 00:13:23.326 Fixed Capacity Management: Not Supported 00:13:23.326 Variable Capacity Management: Not Supported 00:13:23.326 Delete Endurance Group: Not Supported 00:13:23.326 Delete NVM Set: Not Supported 00:13:23.326 Extended LBA Formats Supported: Not Supported 00:13:23.326 Flexible Data Placement Supported: Not Supported 00:13:23.326 00:13:23.326 Controller Memory Buffer Support 00:13:23.326 ================================ 00:13:23.326 Supported: No 00:13:23.326 00:13:23.326 Persistent Memory Region Support 00:13:23.326 ================================ 00:13:23.326 Supported: No 00:13:23.326 00:13:23.326 Admin Command Set Attributes 00:13:23.326 ============================ 00:13:23.326 Security Send/Receive: Not Supported 00:13:23.326 Format NVM: Not Supported 00:13:23.326 Firmware Activate/Download: Not Supported 00:13:23.326 Namespace Management: Not Supported 00:13:23.326 Device Self-Test: Not Supported 00:13:23.326 Directives: Not Supported 00:13:23.326 NVMe-MI: Not Supported 00:13:23.326 Virtualization Management: Not Supported 00:13:23.326 Doorbell Buffer Config: Not Supported 00:13:23.326 Get LBA Status Capability: Not Supported 00:13:23.326 Command & Feature Lockdown Capability: Not Supported 00:13:23.326 Abort Command Limit: 4 00:13:23.326 Async Event Request Limit: 4 00:13:23.327 Number of Firmware Slots: N/A 00:13:23.327 Firmware Slot 1 Read-Only: N/A 00:13:23.327 Firmware Activation Without Reset: N/A 00:13:23.327 Multiple Update Detection Support: N/A 00:13:23.327 Firmware Update Granularity: No Information Provided 00:13:23.327 Per-Namespace SMART Log: No 00:13:23.327 Asymmetric Namespace Access Log Page: Not Supported 00:13:23.327 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:23.327 Command Effects Log Page: Supported 00:13:23.327 Get Log Page Extended Data: Supported 00:13:23.327 Telemetry Log Pages: Not Supported 00:13:23.327 Persistent Event Log Pages: Not Supported 00:13:23.327 Supported Log Pages Log Page: May Support 00:13:23.327 Commands Supported & Effects Log Page: Not Supported 00:13:23.327 Feature Identifiers & Effects Log Page:May Support 00:13:23.327 NVMe-MI Commands & Effects Log Page: May Support 00:13:23.327 Data Area 4 for Telemetry Log: Not Supported 00:13:23.327 Error Log Page Entries Supported: 128 00:13:23.327 Keep Alive: Supported 00:13:23.327 Keep Alive Granularity: 10000 ms 00:13:23.327 00:13:23.327 NVM Command Set Attributes 
00:13:23.327 ========================== 00:13:23.327 Submission Queue Entry Size 00:13:23.327 Max: 64 00:13:23.327 Min: 64 00:13:23.327 Completion Queue Entry Size 00:13:23.327 Max: 16 00:13:23.327 Min: 16 00:13:23.327 Number of Namespaces: 32 00:13:23.327 Compare Command: Supported 00:13:23.327 Write Uncorrectable Command: Not Supported 00:13:23.327 Dataset Management Command: Supported 00:13:23.327 Write Zeroes Command: Supported 00:13:23.327 Set Features Save Field: Not Supported 00:13:23.327 Reservations: Not Supported 00:13:23.327 Timestamp: Not Supported 00:13:23.327 Copy: Supported 00:13:23.327 Volatile Write Cache: Present 00:13:23.327 Atomic Write Unit (Normal): 1 00:13:23.327 Atomic Write Unit (PFail): 1 00:13:23.327 Atomic Compare & Write Unit: 1 00:13:23.327 Fused Compare & Write: Supported 00:13:23.327 Scatter-Gather List 00:13:23.327 SGL Command Set: Supported (Dword aligned) 00:13:23.327 SGL Keyed: Not Supported 00:13:23.327 SGL Bit Bucket Descriptor: Not Supported 00:13:23.327 SGL Metadata Pointer: Not Supported 00:13:23.327 Oversized SGL: Not Supported 00:13:23.327 SGL Metadata Address: Not Supported 00:13:23.327 SGL Offset: Not Supported 00:13:23.327 Transport SGL Data Block: Not Supported 00:13:23.327 Replay Protected Memory Block: Not Supported 00:13:23.327 00:13:23.327 Firmware Slot Information 00:13:23.327 ========================= 00:13:23.327 Active slot: 1 00:13:23.327 Slot 1 Firmware Revision: 24.09 00:13:23.327 00:13:23.327 00:13:23.327 Commands Supported and Effects 00:13:23.327 ============================== 00:13:23.327 Admin Commands 00:13:23.327 -------------- 00:13:23.327 Get Log Page (02h): Supported 00:13:23.327 Identify (06h): Supported 00:13:23.327 Abort (08h): Supported 00:13:23.327 Set Features (09h): Supported 00:13:23.327 Get Features (0Ah): Supported 00:13:23.327 Asynchronous Event Request (0Ch): Supported 00:13:23.327 Keep Alive (18h): Supported 00:13:23.327 I/O Commands 00:13:23.327 ------------ 00:13:23.327 Flush (00h): Supported LBA-Change 00:13:23.327 Write (01h): Supported LBA-Change 00:13:23.327 Read (02h): Supported 00:13:23.327 Compare (05h): Supported 00:13:23.327 Write Zeroes (08h): Supported LBA-Change 00:13:23.327 Dataset Management (09h): Supported LBA-Change 00:13:23.327 Copy (19h): Supported LBA-Change 00:13:23.327 00:13:23.327 Error Log 00:13:23.327 ========= 00:13:23.327 00:13:23.327 Arbitration 00:13:23.327 =========== 00:13:23.327 Arbitration Burst: 1 00:13:23.327 00:13:23.327 Power Management 00:13:23.327 ================ 00:13:23.327 Number of Power States: 1 00:13:23.327 Current Power State: Power State #0 00:13:23.327 Power State #0: 00:13:23.327 Max Power: 0.00 W 00:13:23.327 Non-Operational State: Operational 00:13:23.327 Entry Latency: Not Reported 00:13:23.327 Exit Latency: Not Reported 00:13:23.327 Relative Read Throughput: 0 00:13:23.327 Relative Read Latency: 0 00:13:23.327 Relative Write Throughput: 0 00:13:23.327 Relative Write Latency: 0 00:13:23.327 Idle Power: Not Reported 00:13:23.327 Active Power: Not Reported 00:13:23.327 Non-Operational Permissive Mode: Not Supported 00:13:23.327 00:13:23.327 Health Information 00:13:23.327 ================== 00:13:23.327 Critical Warnings: 00:13:23.327 Available Spare Space: OK 00:13:23.327 Temperature: OK 00:13:23.327 Device Reliability: OK 00:13:23.327 Read Only: No 00:13:23.327 Volatile Memory Backup: OK 00:13:23.327 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:23.327 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:23.327 Available Spare: 0% 00:13:23.327 
Available Spare Threshold: 0% 00:13:23.327 [2024-07-15 15:18:27.088326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:23.327 [2024-07-15 15:18:27.088335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:23.327 [2024-07-15 15:18:27.088367] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:23.327 [2024-07-15 15:18:27.088377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.327 [2024-07-15 15:18:27.088385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.327 [2024-07-15 15:18:27.088393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.327 [2024-07-15 15:18:27.088401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.327 [2024-07-15 15:18:27.089368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:23.327 [2024-07-15 15:18:27.089380] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:23.327 [2024-07-15 15:18:27.090370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:23.327 [2024-07-15 15:18:27.090419] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:23.327 [2024-07-15 15:18:27.090426] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:23.327 [2024-07-15 15:18:27.091379] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:23.327 [2024-07-15 15:18:27.091391] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:23.327 [2024-07-15 15:18:27.091442] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:23.327 [2024-07-15 15:18:27.094838] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:23.327 Life Percentage Used: 0% 00:13:23.327 Data Units Read: 0 00:13:23.327 Data Units Written: 0 00:13:23.327 Host Read Commands: 0 00:13:23.327 Host Write Commands: 0 00:13:23.327 Controller Busy Time: 0 minutes 00:13:23.327 Power Cycles: 0 00:13:23.327 Power On Hours: 0 hours 00:13:23.327 Unsafe Shutdowns: 0 00:13:23.327 Unrecoverable Media Errors: 0 00:13:23.327 Lifetime Error Log Entries: 0 00:13:23.327 Warning Temperature Time: 0 minutes 00:13:23.327 Critical Temperature Time: 0 minutes 00:13:23.327 00:13:23.327 Number of Queues 00:13:23.327 ================ 00:13:23.327 Number of I/O Submission Queues: 127 00:13:23.327 Number of I/O Completion Queues: 127 00:13:23.327 00:13:23.327 Active Namespaces 00:13:23.327 ================= 00:13:23.327 Namespace ID:1 00:13:23.327 Error Recovery Timeout: Unlimited 00:13:23.327 Command
Set Identifier: NVM (00h) 00:13:23.327 Deallocate: Supported 00:13:23.327 Deallocated/Unwritten Error: Not Supported 00:13:23.327 Deallocated Read Value: Unknown 00:13:23.327 Deallocate in Write Zeroes: Not Supported 00:13:23.327 Deallocated Guard Field: 0xFFFF 00:13:23.327 Flush: Supported 00:13:23.327 Reservation: Supported 00:13:23.327 Namespace Sharing Capabilities: Multiple Controllers 00:13:23.327 Size (in LBAs): 131072 (0GiB) 00:13:23.327 Capacity (in LBAs): 131072 (0GiB) 00:13:23.327 Utilization (in LBAs): 131072 (0GiB) 00:13:23.327 NGUID: FDEFDA33E1EF4712816BB225E1F55502 00:13:23.327 UUID: fdefda33-e1ef-4712-816b-b225e1f55502 00:13:23.327 Thin Provisioning: Not Supported 00:13:23.327 Per-NS Atomic Units: Yes 00:13:23.328 Atomic Boundary Size (Normal): 0 00:13:23.328 Atomic Boundary Size (PFail): 0 00:13:23.328 Atomic Boundary Offset: 0 00:13:23.328 Maximum Single Source Range Length: 65535 00:13:23.328 Maximum Copy Length: 65535 00:13:23.328 Maximum Source Range Count: 1 00:13:23.328 NGUID/EUI64 Never Reused: No 00:13:23.328 Namespace Write Protected: No 00:13:23.328 Number of LBA Formats: 1 00:13:23.328 Current LBA Format: LBA Format #00 00:13:23.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:23.328 00:13:23.328 15:18:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:23.328 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.586 [2024-07-15 15:18:27.303597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:28.855 Initializing NVMe Controllers 00:13:28.855 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.855 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:28.855 Initialization complete. Launching workers. 00:13:28.855 ======================================================== 00:13:28.855 Latency(us) 00:13:28.855 Device Information : IOPS MiB/s Average min max 00:13:28.855 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39920.49 155.94 3206.20 904.46 7586.64 00:13:28.855 ======================================================== 00:13:28.855 Total : 39920.49 155.94 3206.20 904.46 7586.64 00:13:28.855 00:13:28.855 [2024-07-15 15:18:32.326118] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:28.855 15:18:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:28.855 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.855 [2024-07-15 15:18:32.547160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.125 Initializing NVMe Controllers 00:13:34.125 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:34.125 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:34.125 Initialization complete. Launching workers. 
00:13:34.125 ======================================================== 00:13:34.125 Latency(us) 00:13:34.125 Device Information : IOPS MiB/s Average min max 00:13:34.125 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.17 62.66 7978.31 6981.76 8979.74 00:13:34.125 ======================================================== 00:13:34.125 Total : 16042.17 62.66 7978.31 6981.76 8979.74 00:13:34.125 00:13:34.125 [2024-07-15 15:18:37.582415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.125 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:34.125 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.125 [2024-07-15 15:18:37.805406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:39.467 [2024-07-15 15:18:42.922378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:39.467 Initializing NVMe Controllers 00:13:39.467 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:39.467 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:39.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:39.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:39.467 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:39.467 Initialization complete. Launching workers. 00:13:39.467 Starting thread on core 2 00:13:39.467 Starting thread on core 3 00:13:39.467 Starting thread on core 1 00:13:39.467 15:18:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:39.467 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.467 [2024-07-15 15:18:43.221270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.756 [2024-07-15 15:18:46.282735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.756 Initializing NVMe Controllers 00:13:42.756 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.756 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.756 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:42.756 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:42.756 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:42.756 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:42.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:42.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:42.756 Initialization complete. Launching workers. 
00:13:42.756 Starting thread on core 1 with urgent priority queue 00:13:42.756 Starting thread on core 2 with urgent priority queue 00:13:42.756 Starting thread on core 3 with urgent priority queue 00:13:42.756 Starting thread on core 0 with urgent priority queue 00:13:42.756 SPDK bdev Controller (SPDK1 ) core 0: 8500.00 IO/s 11.76 secs/100000 ios 00:13:42.756 SPDK bdev Controller (SPDK1 ) core 1: 8159.00 IO/s 12.26 secs/100000 ios 00:13:42.756 SPDK bdev Controller (SPDK1 ) core 2: 9857.67 IO/s 10.14 secs/100000 ios 00:13:42.756 SPDK bdev Controller (SPDK1 ) core 3: 7509.67 IO/s 13.32 secs/100000 ios 00:13:42.756 ======================================================== 00:13:42.756 00:13:42.756 15:18:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:42.756 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.756 [2024-07-15 15:18:46.568304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.756 Initializing NVMe Controllers 00:13:42.756 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.756 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.756 Namespace ID: 1 size: 0GB 00:13:42.756 Initialization complete. 00:13:42.756 INFO: using host memory buffer for IO 00:13:42.756 Hello world! 00:13:42.756 [2024-07-15 15:18:46.604661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.756 15:18:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:43.014 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.014 [2024-07-15 15:18:46.891262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.391 Initializing NVMe Controllers 00:13:44.391 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.391 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.391 Initialization complete. Launching workers. 
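(The submit/complete latency histograms below are the output of the overhead run just launched. All of the I/O tools in this phase point at the same vfio-user controller and differ only in their load parameters; the following is condensed from the invocations recorded above, with TRID introduced here purely as shorthand for the shared transport ID string:

  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2   # 4 KiB reads, QD 128, 5 s, lcore 1
  spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # same load, writes
  reconnect      -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE    # 50/50 random read/write on lcores 1-3
  arbitration    -t 3 -r "$TRID" -d 256 -g                                  # urgent-priority queues on cores 0-3
  hello_world    -d 256 -g -r "$TRID"
  overhead       -o 4096 -t 1 -H -g -d 256 -r "$TRID"                       # per-I/O overhead, histograms via -H

Here -q is the queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds and -c the core mask.)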
00:13:44.391 submit (in ns) avg, min, max = 4965.4, 3085.6, 4000267.2 00:13:44.391 complete (in ns) avg, min, max = 20132.1, 1704.0, 3999690.4 00:13:44.391 00:13:44.391 Submit histogram 00:13:44.391 ================ 00:13:44.391 Range in us Cumulative Count 00:13:44.391 3.085 - 3.098: 0.0596% ( 10) 00:13:44.391 3.098 - 3.110: 0.4889% ( 72) 00:13:44.391 3.110 - 3.123: 1.8006% ( 220) 00:13:44.391 3.123 - 3.136: 4.3167% ( 422) 00:13:44.391 3.136 - 3.149: 8.2697% ( 663) 00:13:44.391 3.149 - 3.162: 12.4314% ( 698) 00:13:44.391 3.162 - 3.174: 17.3384% ( 823) 00:13:44.391 3.174 - 3.187: 22.8118% ( 918) 00:13:44.391 3.187 - 3.200: 28.3747% ( 933) 00:13:44.391 3.200 - 3.213: 34.2535% ( 986) 00:13:44.391 3.213 - 3.226: 41.1579% ( 1158) 00:13:44.391 3.226 - 3.238: 48.1815% ( 1178) 00:13:44.391 3.238 - 3.251: 53.3806% ( 872) 00:13:44.391 3.251 - 3.264: 57.3813% ( 671) 00:13:44.391 3.264 - 3.277: 60.9886% ( 605) 00:13:44.391 3.277 - 3.302: 67.1059% ( 1026) 00:13:44.391 3.302 - 3.328: 72.3885% ( 886) 00:13:44.391 3.328 - 3.354: 77.7248% ( 895) 00:13:44.391 3.354 - 3.379: 84.9630% ( 1214) 00:13:44.391 3.379 - 3.405: 87.2526% ( 384) 00:13:44.391 3.405 - 3.430: 88.3675% ( 187) 00:13:44.391 3.430 - 3.456: 89.1903% ( 138) 00:13:44.391 3.456 - 3.482: 90.3649% ( 197) 00:13:44.391 3.482 - 3.507: 91.8257% ( 245) 00:13:44.391 3.507 - 3.533: 93.7455% ( 322) 00:13:44.391 3.533 - 3.558: 95.1824% ( 241) 00:13:44.391 3.558 - 3.584: 96.1901% ( 169) 00:13:44.391 3.584 - 3.610: 97.1798% ( 166) 00:13:44.391 3.610 - 3.635: 98.3246% ( 192) 00:13:44.391 3.635 - 3.661: 98.8791% ( 93) 00:13:44.391 3.661 - 3.686: 99.2309% ( 59) 00:13:44.391 3.686 - 3.712: 99.4515% ( 37) 00:13:44.391 3.712 - 3.738: 99.5946% ( 24) 00:13:44.391 3.738 - 3.763: 99.6601% ( 11) 00:13:44.391 3.763 - 3.789: 99.6959% ( 6) 00:13:44.391 3.840 - 3.866: 99.7078% ( 2) 00:13:44.391 5.709 - 5.734: 99.7138% ( 1) 00:13:44.391 5.965 - 5.990: 99.7198% ( 1) 00:13:44.391 6.093 - 6.118: 99.7257% ( 1) 00:13:44.391 6.144 - 6.170: 99.7317% ( 1) 00:13:44.391 6.195 - 6.221: 99.7436% ( 2) 00:13:44.391 6.272 - 6.298: 99.7496% ( 1) 00:13:44.391 6.400 - 6.426: 99.7555% ( 1) 00:13:44.391 6.528 - 6.554: 99.7615% ( 1) 00:13:44.391 6.605 - 6.656: 99.7675% ( 1) 00:13:44.391 6.656 - 6.707: 99.7794% ( 2) 00:13:44.391 6.707 - 6.758: 99.7854% ( 1) 00:13:44.391 6.758 - 6.810: 99.7913% ( 1) 00:13:44.391 6.810 - 6.861: 99.8092% ( 3) 00:13:44.391 6.861 - 6.912: 99.8152% ( 1) 00:13:44.391 7.014 - 7.066: 99.8271% ( 2) 00:13:44.391 7.066 - 7.117: 99.8331% ( 1) 00:13:44.391 7.117 - 7.168: 99.8390% ( 1) 00:13:44.391 7.168 - 7.219: 99.8509% ( 2) 00:13:44.391 7.219 - 7.270: 99.8629% ( 2) 00:13:44.391 7.270 - 7.322: 99.8688% ( 1) 00:13:44.391 7.373 - 7.424: 99.8748% ( 1) 00:13:44.391 7.424 - 7.475: 99.8808% ( 1) 00:13:44.391 7.526 - 7.578: 99.8927% ( 2) 00:13:44.391 7.578 - 7.629: 99.8986% ( 1) 00:13:44.391 7.629 - 7.680: 99.9106% ( 2) 00:13:44.391 7.885 - 7.936: 99.9165% ( 1) 00:13:44.391 7.936 - 7.987: 99.9225% ( 1) 00:13:44.391 8.090 - 8.141: 99.9285% ( 1) 00:13:44.391 8.141 - 8.192: 99.9344% ( 1) 00:13:44.391 8.294 - 8.346: 99.9404% ( 1) 00:13:44.391 9.574 - 9.626: 99.9463% ( 1) 00:13:44.391 14.438 - 14.541: 99.9523% ( 1) 00:13:44.391 172.851 - 173.670: 99.9583% ( 1) 00:13:44.391 3984.589 - 4010.803: 100.0000% ( 7) 00:13:44.391 00:13:44.391 Complete histogram 00:13:44.391 ================== 00:13:44.391 Range in us Cumulative Count 00:13:44.391 1.702 - 1.715: 0.1133% ( 19) 00:13:44.391 1.715 - 1.728: 4.7460% ( 777) 00:13:44.391 1.728 - 1.741: 15.4186% ( 1790) 00:13:44.391 1.741 - 1.754: 
18.4236% ( 504) 00:13:44.391 1.754 - 1.766: 19.3060% ( 148) 00:13:44.391 1.766 - 1.779: 33.3174% ( 2350) 00:13:44.391 1.779 - 1.792: 77.2418% ( 7367) 00:13:44.391 1.792 - 1.805: 93.9781% ( 2807) [2024-07-15 15:18:47.910348] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.391 1.805 - 1.818: 96.8996% ( 490) 00:13:44.391 1.818 - 1.830: 97.5137% ( 103) 00:13:44.391 1.830 - 1.843: 97.7164% ( 34) 00:13:44.391 1.843 - 1.856: 98.3186% ( 101) 00:13:44.391 1.856 - 1.869: 98.9208% ( 101) 00:13:44.391 1.869 - 1.882: 99.1772% ( 43) 00:13:44.391 1.882 - 1.894: 99.2607% ( 14) 00:13:44.391 1.894 - 1.907: 99.2726% ( 2) 00:13:44.391 1.907 - 1.920: 99.2845% ( 2) 00:13:44.391 1.920 - 1.933: 99.2905% ( 1) 00:13:44.391 1.933 - 1.946: 99.2964% ( 1) 00:13:44.391 1.958 - 1.971: 99.3084% ( 2) 00:13:44.391 2.048 - 2.061: 99.3143% ( 1) 00:13:44.391 2.176 - 2.189: 99.3203% ( 1) 00:13:44.391 4.378 - 4.403: 99.3263% ( 1) 00:13:44.391 4.429 - 4.454: 99.3322% ( 1) 00:13:44.391 4.659 - 4.685: 99.3382% ( 1) 00:13:44.391 4.890 - 4.915: 99.3441% ( 1) 00:13:44.391 5.197 - 5.222: 99.3561% ( 2) 00:13:44.391 5.299 - 5.325: 99.3620% ( 1) 00:13:44.391 5.350 - 5.376: 99.3740% ( 2) 00:13:44.391 5.402 - 5.427: 99.3799% ( 1) 00:13:44.391 5.453 - 5.478: 99.3859% ( 1) 00:13:44.391 5.478 - 5.504: 99.3978% ( 2) 00:13:44.391 5.581 - 5.606: 99.4157% ( 3) 00:13:44.391 5.658 - 5.683: 99.4217% ( 1) 00:13:44.391 5.709 - 5.734: 99.4276% ( 1) 00:13:44.391 5.734 - 5.760: 99.4336% ( 1) 00:13:44.391 5.862 - 5.888: 99.4395% ( 1) 00:13:44.391 6.118 - 6.144: 99.4455% ( 1) 00:13:44.391 6.323 - 6.349: 99.4515% ( 1) 00:13:44.391 6.349 - 6.374: 99.4634% ( 2) 00:13:44.391 6.400 - 6.426: 99.4694% ( 1) 00:13:44.391 6.605 - 6.656: 99.4753% ( 1) 00:13:44.391 6.656 - 6.707: 99.4872% ( 2) 00:13:44.391 6.758 - 6.810: 99.4932% ( 1) 00:13:44.391 6.912 - 6.963: 99.4992% ( 1) 00:13:44.391 6.963 - 7.014: 99.5051% ( 1) 00:13:44.391 7.117 - 7.168: 99.5111% ( 1) 00:13:44.391 7.322 - 7.373: 99.5171% ( 1) 00:13:44.391 7.424 - 7.475: 99.5230% ( 1) 00:13:44.391 8.192 - 8.243: 99.5290% ( 1) 00:13:44.391 16.998 - 17.101: 99.5349% ( 1) 00:13:44.391 48.333 - 48.538: 99.5409% ( 1) 00:13:44.391 3984.589 - 4010.803: 100.0000% ( 77) 00:13:44.391 00:13:44.391 15:18:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:44.391 15:18:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:44.391 15:18:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:44.392 15:18:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:44.392 15:18:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:44.392 [ 00:13:44.392 { 00:13:44.392 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.392 "subtype": "Discovery", 00:13:44.392 "listen_addresses": [], 00:13:44.392 "allow_any_host": true, 00:13:44.392 "hosts": [] 00:13:44.392 }, 00:13:44.392 { 00:13:44.392 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:44.392 "subtype": "NVMe", 00:13:44.392 "listen_addresses": [ 00:13:44.392 { 00:13:44.392 "trtype": "VFIOUSER", 00:13:44.392 "adrfam": "IPv4", 00:13:44.392 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:44.392 "trsvcid": "0" 00:13:44.392 } 00:13:44.392 ], 00:13:44.392
"allow_any_host": true, 00:13:44.392 "hosts": [], 00:13:44.392 "serial_number": "SPDK1", 00:13:44.392 "model_number": "SPDK bdev Controller", 00:13:44.392 "max_namespaces": 32, 00:13:44.392 "min_cntlid": 1, 00:13:44.392 "max_cntlid": 65519, 00:13:44.392 "namespaces": [ 00:13:44.392 { 00:13:44.392 "nsid": 1, 00:13:44.392 "bdev_name": "Malloc1", 00:13:44.392 "name": "Malloc1", 00:13:44.392 "nguid": "FDEFDA33E1EF4712816BB225E1F55502", 00:13:44.392 "uuid": "fdefda33-e1ef-4712-816b-b225e1f55502" 00:13:44.392 } 00:13:44.392 ] 00:13:44.392 }, 00:13:44.392 { 00:13:44.392 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:44.392 "subtype": "NVMe", 00:13:44.392 "listen_addresses": [ 00:13:44.392 { 00:13:44.392 "trtype": "VFIOUSER", 00:13:44.392 "adrfam": "IPv4", 00:13:44.392 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:44.392 "trsvcid": "0" 00:13:44.392 } 00:13:44.392 ], 00:13:44.392 "allow_any_host": true, 00:13:44.392 "hosts": [], 00:13:44.392 "serial_number": "SPDK2", 00:13:44.392 "model_number": "SPDK bdev Controller", 00:13:44.392 "max_namespaces": 32, 00:13:44.392 "min_cntlid": 1, 00:13:44.392 "max_cntlid": 65519, 00:13:44.392 "namespaces": [ 00:13:44.392 { 00:13:44.392 "nsid": 1, 00:13:44.392 "bdev_name": "Malloc2", 00:13:44.392 "name": "Malloc2", 00:13:44.392 "nguid": "E6166BD3F6CE4E2AAAE3C242405D1890", 00:13:44.392 "uuid": "e6166bd3-f6ce-4e2a-aae3-c242405d1890" 00:13:44.392 } 00:13:44.392 ] 00:13:44.392 } 00:13:44.392 ] 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2982311 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:44.392 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:44.392 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.650 [2024-07-15 15:18:48.302558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.651 Malloc3 00:13:44.651 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:44.651 [2024-07-15 15:18:48.495911] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.651 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:44.651 Asynchronous Event Request test 00:13:44.651 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.651 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.651 Registering asynchronous event callbacks... 00:13:44.651 Starting namespace attribute notice tests for all controllers... 00:13:44.651 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:44.651 aer_cb - Changed Namespace 00:13:44.651 Cleaning up... 00:13:44.909 [ 00:13:44.910 { 00:13:44.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.910 "subtype": "Discovery", 00:13:44.910 "listen_addresses": [], 00:13:44.910 "allow_any_host": true, 00:13:44.910 "hosts": [] 00:13:44.910 }, 00:13:44.910 { 00:13:44.910 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:44.910 "subtype": "NVMe", 00:13:44.910 "listen_addresses": [ 00:13:44.910 { 00:13:44.910 "trtype": "VFIOUSER", 00:13:44.910 "adrfam": "IPv4", 00:13:44.910 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:44.910 "trsvcid": "0" 00:13:44.910 } 00:13:44.910 ], 00:13:44.910 "allow_any_host": true, 00:13:44.910 "hosts": [], 00:13:44.910 "serial_number": "SPDK1", 00:13:44.910 "model_number": "SPDK bdev Controller", 00:13:44.910 "max_namespaces": 32, 00:13:44.910 "min_cntlid": 1, 00:13:44.910 "max_cntlid": 65519, 00:13:44.910 "namespaces": [ 00:13:44.910 { 00:13:44.910 "nsid": 1, 00:13:44.910 "bdev_name": "Malloc1", 00:13:44.910 "name": "Malloc1", 00:13:44.910 "nguid": "FDEFDA33E1EF4712816BB225E1F55502", 00:13:44.910 "uuid": "fdefda33-e1ef-4712-816b-b225e1f55502" 00:13:44.910 }, 00:13:44.910 { 00:13:44.910 "nsid": 2, 00:13:44.910 "bdev_name": "Malloc3", 00:13:44.910 "name": "Malloc3", 00:13:44.910 "nguid": "E94945F590E24BA3B3B09456FD654551", 00:13:44.910 "uuid": "e94945f5-90e2-4ba3-b3b0-9456fd654551" 00:13:44.910 } 00:13:44.910 ] 00:13:44.910 }, 00:13:44.910 { 00:13:44.910 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:44.910 "subtype": "NVMe", 00:13:44.910 "listen_addresses": [ 00:13:44.910 { 00:13:44.910 "trtype": "VFIOUSER", 00:13:44.910 "adrfam": "IPv4", 00:13:44.910 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:44.910 "trsvcid": "0" 00:13:44.910 } 00:13:44.910 ], 00:13:44.910 "allow_any_host": true, 00:13:44.910 "hosts": [], 00:13:44.910 "serial_number": "SPDK2", 00:13:44.910 "model_number": "SPDK bdev Controller", 00:13:44.910 
"max_namespaces": 32, 00:13:44.910 "min_cntlid": 1, 00:13:44.910 "max_cntlid": 65519, 00:13:44.910 "namespaces": [ 00:13:44.910 { 00:13:44.910 "nsid": 1, 00:13:44.910 "bdev_name": "Malloc2", 00:13:44.910 "name": "Malloc2", 00:13:44.910 "nguid": "E6166BD3F6CE4E2AAAE3C242405D1890", 00:13:44.910 "uuid": "e6166bd3-f6ce-4e2a-aae3-c242405d1890" 00:13:44.910 } 00:13:44.910 ] 00:13:44.910 } 00:13:44.910 ] 00:13:44.910 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2982311 00:13:44.910 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:44.910 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:44.910 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:44.910 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:44.910 [2024-07-15 15:18:48.736558] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:13:44.910 [2024-07-15 15:18:48.736598] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982571 ] 00:13:44.910 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.910 [2024-07-15 15:18:48.769061] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:44.910 [2024-07-15 15:18:48.780795] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.910 [2024-07-15 15:18:48.780818] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa69643b000 00:13:44.910 [2024-07-15 15:18:48.781793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.782793] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.783808] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.784820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.785830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.786835] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.787844] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.788856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.910 [2024-07-15 15:18:48.789862] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.910 [2024-07-15 15:18:48.789874] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa696430000 00:13:44.910 [2024-07-15 15:18:48.790764] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.910 [2024-07-15 15:18:48.798981] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:44.910 [2024-07-15 15:18:48.799005] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:44.910 [2024-07-15 15:18:48.804091] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:44.910 [2024-07-15 15:18:48.804128] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:44.910 [2024-07-15 15:18:48.804194] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:44.910 [2024-07-15 15:18:48.804215] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:44.910 [2024-07-15 15:18:48.804221] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:44.910 [2024-07-15 15:18:48.805092] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:44.910 [2024-07-15 15:18:48.805103] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:44.910 [2024-07-15 15:18:48.805111] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:44.910 [2024-07-15 15:18:48.806099] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:44.910 [2024-07-15 15:18:48.806109] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:44.910 [2024-07-15 15:18:48.806118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.910 [2024-07-15 15:18:48.807105] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:44.910 [2024-07-15 15:18:48.807116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.910 [2024-07-15 15:18:48.808114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:44.910 [2024-07-15 15:18:48.808124] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:44.910 [2024-07-15 15:18:48.808131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:44.910 [2024-07-15 15:18:48.808139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.910 [2024-07-15 15:18:48.808246] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:44.910 [2024-07-15 15:18:48.808252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.910 [2024-07-15 15:18:48.808258] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:44.910 [2024-07-15 15:18:48.809121] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:44.910 [2024-07-15 15:18:48.810131] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:44.910 [2024-07-15 15:18:48.811140] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:44.910 [2024-07-15 15:18:48.812142] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.910 [2024-07-15 15:18:48.812182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.910 [2024-07-15 15:18:48.813151] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:44.910 [2024-07-15 15:18:48.813161] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.910 [2024-07-15 15:18:48.813170] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:44.910 [2024-07-15 15:18:48.813189] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:44.910 [2024-07-15 15:18:48.813200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.910 [2024-07-15 15:18:48.813213] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.910 [2024-07-15 15:18:48.813220] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.910 [2024-07-15 15:18:48.813232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.170 [2024-07-15 15:18:48.820844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:45.170 [2024-07-15 15:18:48.820857] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:45.170 [2024-07-15 15:18:48.820866] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:45.170 [2024-07-15 15:18:48.820872] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:45.170 [2024-07-15 15:18:48.820878] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:45.170 [2024-07-15 15:18:48.820885] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:45.170 [2024-07-15 15:18:48.820891] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:45.170 [2024-07-15 15:18:48.820897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.820906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.820917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:45.170 [2024-07-15 15:18:48.828840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:45.170 [2024-07-15 15:18:48.828855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.170 [2024-07-15 15:18:48.828865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.170 [2024-07-15 15:18:48.828874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.170 [2024-07-15 15:18:48.828883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.170 [2024-07-15 15:18:48.828889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.828899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.828909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:45.170 [2024-07-15 15:18:48.836840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:45.170 [2024-07-15 15:18:48.836850] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:45.170 [2024-07-15 15:18:48.836860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.836869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.836875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.836885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.170 [2024-07-15 15:18:48.844839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:45.170 [2024-07-15 15:18:48.844891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.844901] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.844910] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:45.170 [2024-07-15 15:18:48.844916] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:45.170 [2024-07-15 15:18:48.844923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:45.170 [2024-07-15 15:18:48.852839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:45.170 [2024-07-15 15:18:48.852852] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:45.170 [2024-07-15 15:18:48.852863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.852872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.852880] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.170 [2024-07-15 15:18:48.852886] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.170 [2024-07-15 15:18:48.852893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.170 [2024-07-15 15:18:48.860838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:45.170 [2024-07-15 15:18:48.860853] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.860863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:45.170 [2024-07-15 15:18:48.860872] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.170 [2024-07-15 15:18:48.860877] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.171 [2024-07-15 15:18:48.860884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.868837] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.868849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:45.171 [2024-07-15 15:18:48.868860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:45.171 [2024-07-15 15:18:48.868871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:45.171 [2024-07-15 15:18:48.868878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:45.171 [2024-07-15 15:18:48.868884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:45.171 [2024-07-15 15:18:48.868891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:45.171 [2024-07-15 15:18:48.868897] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:45.171 [2024-07-15 15:18:48.868903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:45.171 [2024-07-15 15:18:48.868909] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:45.171 [2024-07-15 15:18:48.868927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.876839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.876854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.884838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.884853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.892838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.892853] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.900840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.900858] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:45.171 [2024-07-15 15:18:48.900865] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:45.171 [2024-07-15 15:18:48.900870] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:13:45.171 [2024-07-15 15:18:48.900874] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:45.171 [2024-07-15 15:18:48.900881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:45.171 [2024-07-15 15:18:48.900890] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:45.171 [2024-07-15 15:18:48.900895] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:45.171 [2024-07-15 15:18:48.900902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.900910] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:45.171 [2024-07-15 15:18:48.900916] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.171 [2024-07-15 15:18:48.900923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.900933] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:45.171 [2024-07-15 15:18:48.900939] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:45.171 [2024-07-15 15:18:48.900946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:45.171 [2024-07-15 15:18:48.908839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.908856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.908868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:45.171 [2024-07-15 15:18:48.908877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:45.171 ===================================================== 00:13:45.171 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.171 ===================================================== 00:13:45.171 Controller Capabilities/Features 00:13:45.171 ================================ 00:13:45.171 Vendor ID: 4e58 00:13:45.171 Subsystem Vendor ID: 4e58 00:13:45.171 Serial Number: SPDK2 00:13:45.171 Model Number: SPDK bdev Controller 00:13:45.171 Firmware Version: 24.09 00:13:45.171 Recommended Arb Burst: 6 00:13:45.171 IEEE OUI Identifier: 8d 6b 50 00:13:45.171 Multi-path I/O 00:13:45.171 May have multiple subsystem ports: Yes 00:13:45.171 May have multiple controllers: Yes 00:13:45.171 Associated with SR-IOV VF: No 00:13:45.171 Max Data Transfer Size: 131072 00:13:45.171 Max Number of Namespaces: 32 00:13:45.171 Max Number of I/O Queues: 127 00:13:45.171 NVMe Specification Version (VS): 1.3 00:13:45.171 NVMe Specification Version (Identify): 1.3 00:13:45.171 Maximum Queue Entries: 256 00:13:45.171 Contiguous Queues Required: Yes 00:13:45.171 Arbitration Mechanisms 
Supported 00:13:45.171 Weighted Round Robin: Not Supported 00:13:45.171 Vendor Specific: Not Supported 00:13:45.171 Reset Timeout: 15000 ms 00:13:45.171 Doorbell Stride: 4 bytes 00:13:45.171 NVM Subsystem Reset: Not Supported 00:13:45.171 Command Sets Supported 00:13:45.171 NVM Command Set: Supported 00:13:45.171 Boot Partition: Not Supported 00:13:45.171 Memory Page Size Minimum: 4096 bytes 00:13:45.171 Memory Page Size Maximum: 4096 bytes 00:13:45.171 Persistent Memory Region: Not Supported 00:13:45.171 Optional Asynchronous Events Supported 00:13:45.171 Namespace Attribute Notices: Supported 00:13:45.171 Firmware Activation Notices: Not Supported 00:13:45.171 ANA Change Notices: Not Supported 00:13:45.171 PLE Aggregate Log Change Notices: Not Supported 00:13:45.171 LBA Status Info Alert Notices: Not Supported 00:13:45.171 EGE Aggregate Log Change Notices: Not Supported 00:13:45.171 Normal NVM Subsystem Shutdown event: Not Supported 00:13:45.171 Zone Descriptor Change Notices: Not Supported 00:13:45.171 Discovery Log Change Notices: Not Supported 00:13:45.171 Controller Attributes 00:13:45.171 128-bit Host Identifier: Supported 00:13:45.171 Non-Operational Permissive Mode: Not Supported 00:13:45.171 NVM Sets: Not Supported 00:13:45.171 Read Recovery Levels: Not Supported 00:13:45.171 Endurance Groups: Not Supported 00:13:45.171 Predictable Latency Mode: Not Supported 00:13:45.171 Traffic Based Keep ALive: Not Supported 00:13:45.171 Namespace Granularity: Not Supported 00:13:45.171 SQ Associations: Not Supported 00:13:45.171 UUID List: Not Supported 00:13:45.171 Multi-Domain Subsystem: Not Supported 00:13:45.171 Fixed Capacity Management: Not Supported 00:13:45.171 Variable Capacity Management: Not Supported 00:13:45.171 Delete Endurance Group: Not Supported 00:13:45.171 Delete NVM Set: Not Supported 00:13:45.171 Extended LBA Formats Supported: Not Supported 00:13:45.171 Flexible Data Placement Supported: Not Supported 00:13:45.171 00:13:45.171 Controller Memory Buffer Support 00:13:45.171 ================================ 00:13:45.171 Supported: No 00:13:45.171 00:13:45.171 Persistent Memory Region Support 00:13:45.171 ================================ 00:13:45.171 Supported: No 00:13:45.171 00:13:45.171 Admin Command Set Attributes 00:13:45.171 ============================ 00:13:45.171 Security Send/Receive: Not Supported 00:13:45.171 Format NVM: Not Supported 00:13:45.171 Firmware Activate/Download: Not Supported 00:13:45.171 Namespace Management: Not Supported 00:13:45.171 Device Self-Test: Not Supported 00:13:45.171 Directives: Not Supported 00:13:45.171 NVMe-MI: Not Supported 00:13:45.171 Virtualization Management: Not Supported 00:13:45.171 Doorbell Buffer Config: Not Supported 00:13:45.171 Get LBA Status Capability: Not Supported 00:13:45.171 Command & Feature Lockdown Capability: Not Supported 00:13:45.171 Abort Command Limit: 4 00:13:45.171 Async Event Request Limit: 4 00:13:45.171 Number of Firmware Slots: N/A 00:13:45.171 Firmware Slot 1 Read-Only: N/A 00:13:45.171 Firmware Activation Without Reset: N/A 00:13:45.171 Multiple Update Detection Support: N/A 00:13:45.171 Firmware Update Granularity: No Information Provided 00:13:45.171 Per-Namespace SMART Log: No 00:13:45.171 Asymmetric Namespace Access Log Page: Not Supported 00:13:45.171 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:45.171 Command Effects Log Page: Supported 00:13:45.171 Get Log Page Extended Data: Supported 00:13:45.171 Telemetry Log Pages: Not Supported 00:13:45.171 Persistent Event Log Pages: Not Supported 
00:13:45.171 Supported Log Pages Log Page: May Support 00:13:45.171 Commands Supported & Effects Log Page: Not Supported 00:13:45.171 Feature Identifiers & Effects Log Page:May Support 00:13:45.171 NVMe-MI Commands & Effects Log Page: May Support 00:13:45.171 Data Area 4 for Telemetry Log: Not Supported 00:13:45.171 Error Log Page Entries Supported: 128 00:13:45.171 Keep Alive: Supported 00:13:45.171 Keep Alive Granularity: 10000 ms 00:13:45.171 00:13:45.171 NVM Command Set Attributes 00:13:45.171 ========================== 00:13:45.171 Submission Queue Entry Size 00:13:45.171 Max: 64 00:13:45.172 Min: 64 00:13:45.172 Completion Queue Entry Size 00:13:45.172 Max: 16 00:13:45.172 Min: 16 00:13:45.172 Number of Namespaces: 32 00:13:45.172 Compare Command: Supported 00:13:45.172 Write Uncorrectable Command: Not Supported 00:13:45.172 Dataset Management Command: Supported 00:13:45.172 Write Zeroes Command: Supported 00:13:45.172 Set Features Save Field: Not Supported 00:13:45.172 Reservations: Not Supported 00:13:45.172 Timestamp: Not Supported 00:13:45.172 Copy: Supported 00:13:45.172 Volatile Write Cache: Present 00:13:45.172 Atomic Write Unit (Normal): 1 00:13:45.172 Atomic Write Unit (PFail): 1 00:13:45.172 Atomic Compare & Write Unit: 1 00:13:45.172 Fused Compare & Write: Supported 00:13:45.172 Scatter-Gather List 00:13:45.172 SGL Command Set: Supported (Dword aligned) 00:13:45.172 SGL Keyed: Not Supported 00:13:45.172 SGL Bit Bucket Descriptor: Not Supported 00:13:45.172 SGL Metadata Pointer: Not Supported 00:13:45.172 Oversized SGL: Not Supported 00:13:45.172 SGL Metadata Address: Not Supported 00:13:45.172 SGL Offset: Not Supported 00:13:45.172 Transport SGL Data Block: Not Supported 00:13:45.172 Replay Protected Memory Block: Not Supported 00:13:45.172 00:13:45.172 Firmware Slot Information 00:13:45.172 ========================= 00:13:45.172 Active slot: 1 00:13:45.172 Slot 1 Firmware Revision: 24.09 00:13:45.172 00:13:45.172 00:13:45.172 Commands Supported and Effects 00:13:45.172 ============================== 00:13:45.172 Admin Commands 00:13:45.172 -------------- 00:13:45.172 Get Log Page (02h): Supported 00:13:45.172 Identify (06h): Supported 00:13:45.172 Abort (08h): Supported 00:13:45.172 Set Features (09h): Supported 00:13:45.172 Get Features (0Ah): Supported 00:13:45.172 Asynchronous Event Request (0Ch): Supported 00:13:45.172 Keep Alive (18h): Supported 00:13:45.172 I/O Commands 00:13:45.172 ------------ 00:13:45.172 Flush (00h): Supported LBA-Change 00:13:45.172 Write (01h): Supported LBA-Change 00:13:45.172 Read (02h): Supported 00:13:45.172 Compare (05h): Supported 00:13:45.172 Write Zeroes (08h): Supported LBA-Change 00:13:45.172 Dataset Management (09h): Supported LBA-Change 00:13:45.172 Copy (19h): Supported LBA-Change 00:13:45.172 00:13:45.172 Error Log 00:13:45.172 ========= 00:13:45.172 00:13:45.172 Arbitration 00:13:45.172 =========== 00:13:45.172 Arbitration Burst: 1 00:13:45.172 00:13:45.172 Power Management 00:13:45.172 ================ 00:13:45.172 Number of Power States: 1 00:13:45.172 Current Power State: Power State #0 00:13:45.172 Power State #0: 00:13:45.172 Max Power: 0.00 W 00:13:45.172 Non-Operational State: Operational 00:13:45.172 Entry Latency: Not Reported 00:13:45.172 Exit Latency: Not Reported 00:13:45.172 Relative Read Throughput: 0 00:13:45.172 Relative Read Latency: 0 00:13:45.172 Relative Write Throughput: 0 00:13:45.172 Relative Write Latency: 0 00:13:45.172 Idle Power: Not Reported 00:13:45.172 Active Power: Not Reported 00:13:45.172 
Non-Operational Permissive Mode: Not Supported 00:13:45.172 00:13:45.172 Health Information 00:13:45.172 ================== 00:13:45.172 Critical Warnings: 00:13:45.172 Available Spare Space: OK 00:13:45.172 Temperature: OK 00:13:45.172 Device Reliability: OK 00:13:45.172 Read Only: No 00:13:45.172 Volatile Memory Backup: OK 00:13:45.172 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:45.172 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:45.172 Available Spare: 0% 00:13:45.172 Available Spare Threshold: 0% [2024-07-15 15:18:48.908967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:45.172 [2024-07-15 15:18:48.916840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:45.172 [2024-07-15 15:18:48.916876] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:45.172 [2024-07-15 15:18:48.916886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.172 [2024-07-15 15:18:48.916894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.172 [2024-07-15 15:18:48.916902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.172 [2024-07-15 15:18:48.916910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.172 [2024-07-15 15:18:48.916963] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:45.172 [2024-07-15 15:18:48.916975] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:45.172 [2024-07-15 15:18:48.917966] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.172 [2024-07-15 15:18:48.918011] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:45.172 [2024-07-15 15:18:48.918019] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:45.172 [2024-07-15 15:18:48.918964] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:45.172 [2024-07-15 15:18:48.918977] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:45.172 [2024-07-15 15:18:48.919025] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:45.172 [2024-07-15 15:18:48.919987] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:45.172 Life Percentage Used: 0% 00:13:45.172 Data Units Read: 0 00:13:45.172 Data Units Written: 0 00:13:45.172 Host Read Commands: 0 00:13:45.172 Host Write Commands: 0 00:13:45.172 Controller Busy Time: 0 minutes 00:13:45.172 Power Cycles: 0 00:13:45.172 Power On Hours: 0 hours 00:13:45.172 Unsafe Shutdowns: 0 00:13:45.172 Unrecoverable Media
Errors: 0 00:13:45.172 Lifetime Error Log Entries: 0 00:13:45.172 Warning Temperature Time: 0 minutes 00:13:45.172 Critical Temperature Time: 0 minutes 00:13:45.172 00:13:45.172 Number of Queues 00:13:45.172 ================ 00:13:45.172 Number of I/O Submission Queues: 127 00:13:45.172 Number of I/O Completion Queues: 127 00:13:45.172 00:13:45.172 Active Namespaces 00:13:45.172 ================= 00:13:45.172 Namespace ID:1 00:13:45.172 Error Recovery Timeout: Unlimited 00:13:45.172 Command Set Identifier: NVM (00h) 00:13:45.172 Deallocate: Supported 00:13:45.172 Deallocated/Unwritten Error: Not Supported 00:13:45.172 Deallocated Read Value: Unknown 00:13:45.172 Deallocate in Write Zeroes: Not Supported 00:13:45.172 Deallocated Guard Field: 0xFFFF 00:13:45.172 Flush: Supported 00:13:45.172 Reservation: Supported 00:13:45.172 Namespace Sharing Capabilities: Multiple Controllers 00:13:45.172 Size (in LBAs): 131072 (0GiB) 00:13:45.172 Capacity (in LBAs): 131072 (0GiB) 00:13:45.172 Utilization (in LBAs): 131072 (0GiB) 00:13:45.172 NGUID: E6166BD3F6CE4E2AAAE3C242405D1890 00:13:45.172 UUID: e6166bd3-f6ce-4e2a-aae3-c242405d1890 00:13:45.172 Thin Provisioning: Not Supported 00:13:45.172 Per-NS Atomic Units: Yes 00:13:45.172 Atomic Boundary Size (Normal): 0 00:13:45.172 Atomic Boundary Size (PFail): 0 00:13:45.172 Atomic Boundary Offset: 0 00:13:45.172 Maximum Single Source Range Length: 65535 00:13:45.172 Maximum Copy Length: 65535 00:13:45.172 Maximum Source Range Count: 1 00:13:45.172 NGUID/EUI64 Never Reused: No 00:13:45.172 Namespace Write Protected: No 00:13:45.172 Number of LBA Formats: 1 00:13:45.172 Current LBA Format: LBA Format #00 00:13:45.172 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:45.172 00:13:45.172 15:18:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:45.172 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.431 [2024-07-15 15:18:49.128836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.696 Initializing NVMe Controllers 00:13:50.696 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.696 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:50.696 Initialization complete. Launching workers. 
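Before the results table, it helps to unpack the spdk_nvme_perf invocation above. The option values are verbatim from this log; the annotations are a hedged reading of the tool's standard flags rather than anything the log itself states:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256   `# hugepage memory to reserve, in MB` \
        -g       `# single-file DPDK memory segments, cf. --single-file-segments in the EAL parameters earlier` \
        -q 128   `# queue depth` \
        -o 4096  `# IO size in bytes` \
        -w read  `# workload type: 100% sequential reads` \
        -t 5     `# run time in seconds` \
        -c 0x2   `# core mask, i.e. lcore 1 only, matching the Associating line above`

The table that follows is internally consistent: 39929.35 IOPS at 4096 bytes per IO is 39929.35 x 4096 / 2^20 = 155.97 MiB/s, exactly the MiB/s column.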
00:13:50.696 ======================================================== 00:13:50.696 Latency(us) 00:13:50.696 Device Information : IOPS MiB/s Average min max 00:13:50.696 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39929.35 155.97 3205.49 904.43 7670.58 00:13:50.696 ======================================================== 00:13:50.696 Total : 39929.35 155.97 3205.49 904.43 7670.58 00:13:50.696 00:13:50.696 [2024-07-15 15:18:54.235103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.696 15:18:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:50.696 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.696 [2024-07-15 15:18:54.448711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.956 Initializing NVMe Controllers 00:13:55.956 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:55.956 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:55.956 Initialization complete. Launching workers. 00:13:55.956 ======================================================== 00:13:55.956 Latency(us) 00:13:55.956 Device Information : IOPS MiB/s Average min max 00:13:55.956 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39944.00 156.03 3204.73 919.04 8625.62 00:13:55.956 ======================================================== 00:13:55.956 Total : 39944.00 156.03 3204.73 919.04 8625.62 00:13:55.956 00:13:55.956 [2024-07-15 15:18:59.469206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.956 15:18:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:55.956 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.956 [2024-07-15 15:18:59.684046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.224 [2024-07-15 15:19:04.819951] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:01.224 Initializing NVMe Controllers 00:14:01.224 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.224 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:01.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:01.224 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:01.224 Initialization complete. Launching workers. 
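For the reconnect run above, -w randrw requests a mixed random workload and -M 50 sets the mix; by analogy with spdk_nvme_perf this reads as a 50% read / 50% write split, though that interpretation is an assumption, not something this log states. The mask -c 0xE pins workers to lcores 1 through 3, which is exactly what the three Associating lines report and what the per-core thread starts below confirm. The mask can be expanded in plain shell:

    # expand core mask 0xE (binary 1110) into lcore numbers
    for i in 0 1 2 3; do (( (0xE >> i) & 1 )) && echo "lcore $i"; done
    # prints: lcore 1, lcore 2, lcore 3 (one per line)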
00:14:01.224 Starting thread on core 2 00:14:01.224 Starting thread on core 3 00:14:01.224 Starting thread on core 1 00:14:01.224 15:19:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:01.224 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.224 [2024-07-15 15:19:05.117459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.510 [2024-07-15 15:19:08.272040] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.510 Initializing NVMe Controllers 00:14:04.510 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.510 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.510 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:04.510 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:04.510 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:04.510 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:04.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:04.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:04.510 Initialization complete. Launching workers. 00:14:04.510 Starting thread on core 1 with urgent priority queue 00:14:04.510 Starting thread on core 2 with urgent priority queue 00:14:04.510 Starting thread on core 3 with urgent priority queue 00:14:04.510 Starting thread on core 0 with urgent priority queue 00:14:04.510 SPDK bdev Controller (SPDK2 ) core 0: 2255.33 IO/s 44.34 secs/100000 ios 00:14:04.510 SPDK bdev Controller (SPDK2 ) core 1: 2012.33 IO/s 49.69 secs/100000 ios 00:14:04.510 SPDK bdev Controller (SPDK2 ) core 2: 1887.00 IO/s 52.99 secs/100000 ios 00:14:04.510 SPDK bdev Controller (SPDK2 ) core 3: 1935.33 IO/s 51.67 secs/100000 ios 00:14:04.510 ======================================================== 00:14:04.510 00:14:04.510 15:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:04.510 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.768 [2024-07-15 15:19:08.564298] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.768 Initializing NVMe Controllers 00:14:04.768 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.768 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:04.768 Namespace ID: 1 size: 0GB 00:14:04.768 Initialization complete. 00:14:04.768 INFO: using host memory buffer for IO 00:14:04.768 Hello world! 
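The controller that hello_world just wrote to and read back from is the SPDK2 subsystem described by the nvmf_get_subsystems output earlier: namespace Malloc2 with 131072 LBAs of 512 bytes, i.e. 64 MiB, which the example rounds down to "size: 0GB". The RPCs that built that subsystem ran before this excerpt, so the sketch below is a hypothetical reconstruction from the standard rpc.py commands, using only names and values visible in this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER                              # register the vfio-user transport
    $rpc bdev_malloc_create 64 512 --name Malloc2                       # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2   # -a: allow any host, -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2       # becomes nsid 1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0      # traddr and trsvcid as reported by nvmf_get_subsystems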
00:14:04.768 [2024-07-15 15:19:08.576389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.768 15:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:05.027 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.027 [2024-07-15 15:19:08.870056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.402 Initializing NVMe Controllers 00:14:06.402 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.402 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.402 Initialization complete. Launching workers. 00:14:06.402 submit (in ns) avg, min, max = 6609.2, 3094.4, 4000687.2 00:14:06.402 complete (in ns) avg, min, max = 20887.8, 1703.2, 4173681.6 00:14:06.402 00:14:06.402 Submit histogram 00:14:06.402 ================ 00:14:06.402 Range in us Cumulative Count 00:14:06.402 3.085 - 3.098: 0.0178% ( 3) 00:14:06.402 3.098 - 3.110: 0.0829% ( 11) 00:14:06.402 3.110 - 3.123: 0.6100% ( 89) 00:14:06.402 3.123 - 3.136: 1.9010% ( 218) 00:14:06.402 3.136 - 3.149: 3.9915% ( 353) 00:14:06.402 3.149 - 3.162: 7.4618% ( 586) 00:14:06.402 3.162 - 3.174: 11.6013% ( 699) 00:14:06.402 3.174 - 3.187: 17.0615% ( 922) 00:14:06.402 3.187 - 3.200: 23.5402% ( 1094) 00:14:06.402 3.200 - 3.213: 29.8827% ( 1071) 00:14:06.402 3.213 - 3.226: 36.3496% ( 1092) 00:14:06.402 3.226 - 3.238: 42.8698% ( 1101) 00:14:06.402 3.238 - 3.251: 50.1658% ( 1232) 00:14:06.402 3.251 - 3.264: 56.6386% ( 1093) 00:14:06.402 3.264 - 3.277: 60.3991% ( 635) 00:14:06.402 3.277 - 3.302: 65.4211% ( 848) 00:14:06.402 3.302 - 3.328: 69.8152% ( 742) 00:14:06.402 3.328 - 3.354: 73.8422% ( 680) 00:14:06.402 3.354 - 3.379: 81.1441% ( 1233) 00:14:06.403 3.379 - 3.405: 86.4858% ( 902) 00:14:06.403 3.405 - 3.430: 88.1736% ( 285) 00:14:06.403 3.430 - 3.456: 88.7658% ( 100) 00:14:06.403 3.456 - 3.482: 89.6897% ( 156) 00:14:06.403 3.482 - 3.507: 91.0221% ( 225) 00:14:06.403 3.507 - 3.533: 92.7928% ( 299) 00:14:06.403 3.533 - 3.558: 94.7234% ( 326) 00:14:06.403 3.558 - 3.584: 95.7835% ( 179) 00:14:06.403 3.584 - 3.610: 96.7133% ( 157) 00:14:06.403 3.610 - 3.635: 97.7851% ( 181) 00:14:06.403 3.635 - 3.661: 98.5965% ( 137) 00:14:06.403 3.661 - 3.686: 99.0465% ( 76) 00:14:06.403 3.686 - 3.712: 99.3604% ( 53) 00:14:06.403 3.712 - 3.738: 99.5262% ( 28) 00:14:06.403 3.738 - 3.763: 99.6032% ( 13) 00:14:06.403 3.763 - 3.789: 99.6388% ( 6) 00:14:06.403 3.789 - 3.814: 99.6447% ( 1) 00:14:06.403 3.840 - 3.866: 99.6506% ( 1) 00:14:06.403 3.994 - 4.019: 99.6565% ( 1) 00:14:06.403 5.299 - 5.325: 99.6624% ( 1) 00:14:06.403 5.325 - 5.350: 99.6743% ( 2) 00:14:06.403 5.453 - 5.478: 99.6861% ( 2) 00:14:06.403 5.555 - 5.581: 99.6980% ( 2) 00:14:06.403 5.683 - 5.709: 99.7098% ( 2) 00:14:06.403 5.709 - 5.734: 99.7157% ( 1) 00:14:06.403 5.734 - 5.760: 99.7276% ( 2) 00:14:06.403 5.760 - 5.786: 99.7513% ( 4) 00:14:06.403 5.786 - 5.811: 99.7572% ( 1) 00:14:06.403 5.811 - 5.837: 99.7631% ( 1) 00:14:06.403 5.888 - 5.914: 99.7690% ( 1) 00:14:06.403 5.914 - 5.939: 99.7809% ( 2) 00:14:06.403 5.965 - 5.990: 99.7868% ( 1) 00:14:06.403 6.118 - 6.144: 99.7927% ( 1) 00:14:06.403 6.144 - 6.170: 99.7986% ( 1) 00:14:06.403 6.170 - 6.195: 99.8046% ( 1) 00:14:06.403 6.195 - 6.221: 99.8105% ( 1) 00:14:06.403 6.272 - 6.298: 99.8164% ( 1) 00:14:06.403 
6.400 - 6.426: 99.8223% ( 1) 00:14:06.403 6.605 - 6.656: 99.8283% ( 1) 00:14:06.403 6.707 - 6.758: 99.8342% ( 1) 00:14:06.403 6.861 - 6.912: 99.8460% ( 2) 00:14:06.403 6.912 - 6.963: 99.8519% ( 1) 00:14:06.403 6.963 - 7.014: 99.8579% ( 1) 00:14:06.403 7.014 - 7.066: 99.8638% ( 1) 00:14:06.403 7.066 - 7.117: 99.8697% ( 1) 00:14:06.403 7.424 - 7.475: 99.8816% ( 2) 00:14:06.403 7.782 - 7.834: 99.8934% ( 2) 00:14:06.403 7.885 - 7.936: 99.8993% ( 1) 00:14:06.403 8.550 - 8.602: 99.9052% ( 1) 00:14:06.403 8.806 - 8.858: 99.9112% ( 1) 00:14:06.403 8.909 - 8.960: 99.9171% ( 1) 00:14:06.403 3984.589 - 4010.803: 100.0000% ( 14) 00:14:06.403 00:14:06.403 Complete histogram 00:14:06.403 ================== 00:14:06.403 Range in us Cumulative Count 00:14:06.403 1.702 - 1.715: 0.3198% ( 54) 00:14:06.403 1.715 - 1.728: 1.3088% ( 167) 00:14:06.403 1.728 - 1.741: 2.4991% ( 201) 00:14:06.403 1.741 - 1.754: 4.1632% ( 281) 00:14:06.403 1.754 - 1.766: 37.8124% ( 5682) 00:14:06.403 1.766 - 1.779: 85.3903% ( 8034) 00:14:06.403 [2024-07-15 15:19:09.961676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:06.403 1.779 - 1.792: 94.6583% ( 1565) 00:14:06.403 1.792 - 1.805: 96.7429% ( 352) 00:14:06.403 1.805 - 1.818: 97.2995% ( 94) 00:14:06.403 1.818 - 1.830: 97.6430% ( 58) 00:14:06.403 1.830 - 1.843: 98.3655% ( 122) 00:14:06.403 1.843 - 1.856: 99.0051% ( 108) 00:14:06.403 1.856 - 1.869: 99.2361% ( 39) 00:14:06.403 1.869 - 1.882: 99.2775% ( 7) 00:14:06.403 1.882 - 1.894: 99.3071% ( 5) 00:14:06.403 1.894 - 1.907: 99.3130% ( 1) 00:14:06.403 1.946 - 1.958: 99.3190% ( 1) 00:14:06.403 1.971 - 1.984: 99.3249% ( 1) 00:14:06.403 2.035 - 2.048: 99.3308% ( 1) 00:14:06.403 2.061 - 2.074: 99.3367% ( 1) 00:14:06.403 2.240 - 2.253: 99.3486% ( 2) 00:14:06.403 3.891 - 3.917: 99.3545% ( 1) 00:14:06.403 3.942 - 3.968: 99.3723% ( 3) 00:14:06.403 3.994 - 4.019: 99.3841% ( 2) 00:14:06.403 4.147 - 4.173: 99.3900% ( 1) 00:14:06.403 4.224 - 4.250: 99.3959% ( 1) 00:14:06.403 4.275 - 4.301: 99.4019% ( 1) 00:14:06.403 4.301 - 4.326: 99.4078% ( 1) 00:14:06.403 4.454 - 4.480: 99.4137% ( 1) 00:14:06.403 4.506 - 4.531: 99.4196% ( 1) 00:14:06.403 4.531 - 4.557: 99.4256% ( 1) 00:14:06.403 4.582 - 4.608: 99.4374% ( 2) 00:14:06.403 4.608 - 4.634: 99.4433% ( 1) 00:14:06.403 4.634 - 4.659: 99.4492% ( 1) 00:14:06.403 4.787 - 4.813: 99.4552% ( 1) 00:14:06.403 4.813 - 4.838: 99.4611% ( 1) 00:14:06.403 4.838 - 4.864: 99.4670% ( 1) 00:14:06.403 5.094 - 5.120: 99.4729% ( 1) 00:14:06.403 5.248 - 5.274: 99.4789% ( 1) 00:14:06.403 5.325 - 5.350: 99.4848% ( 1) 00:14:06.403 5.606 - 5.632: 99.4907% ( 1) 00:14:06.403 6.451 - 6.477: 99.4966% ( 1) 00:14:06.403 7.578 - 7.629: 99.5025% ( 1) 00:14:06.403 8.448 - 8.499: 99.5085% ( 1) 00:14:06.403 12.186 - 12.237: 99.5144% ( 1) 00:14:06.403 13.517 - 13.619: 99.5203% ( 1) 00:14:06.403 2962.227 - 2975.334: 99.5262% ( 1) 00:14:06.403 3984.589 - 4010.803: 99.9941% ( 79) 00:14:06.403 4168.090 - 4194.304: 100.0000% ( 1) 00:14:06.403 00:14:06.403 15:19:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:06.403 15:19:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:06.403 15:19:10 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.403 [ 00:14:06.403 { 00:14:06.403 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.403 "subtype": "Discovery", 00:14:06.403 "listen_addresses": [], 00:14:06.403 "allow_any_host": true, 00:14:06.403 "hosts": [] 00:14:06.403 }, 00:14:06.403 { 00:14:06.403 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.403 "subtype": "NVMe", 00:14:06.403 "listen_addresses": [ 00:14:06.403 { 00:14:06.403 "trtype": "VFIOUSER", 00:14:06.403 "adrfam": "IPv4", 00:14:06.403 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.403 "trsvcid": "0" 00:14:06.403 } 00:14:06.403 ], 00:14:06.403 "allow_any_host": true, 00:14:06.403 "hosts": [], 00:14:06.403 "serial_number": "SPDK1", 00:14:06.403 "model_number": "SPDK bdev Controller", 00:14:06.403 "max_namespaces": 32, 00:14:06.403 "min_cntlid": 1, 00:14:06.403 "max_cntlid": 65519, 00:14:06.403 "namespaces": [ 00:14:06.403 { 00:14:06.403 "nsid": 1, 00:14:06.403 "bdev_name": "Malloc1", 00:14:06.403 "name": "Malloc1", 00:14:06.403 "nguid": "FDEFDA33E1EF4712816BB225E1F55502", 00:14:06.403 "uuid": "fdefda33-e1ef-4712-816b-b225e1f55502" 00:14:06.403 }, 00:14:06.403 { 00:14:06.403 "nsid": 2, 00:14:06.403 "bdev_name": "Malloc3", 00:14:06.403 "name": "Malloc3", 00:14:06.403 "nguid": "E94945F590E24BA3B3B09456FD654551", 00:14:06.403 "uuid": "e94945f5-90e2-4ba3-b3b0-9456fd654551" 00:14:06.403 } 00:14:06.403 ] 00:14:06.403 }, 00:14:06.403 { 00:14:06.403 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.403 "subtype": "NVMe", 00:14:06.403 "listen_addresses": [ 00:14:06.403 { 00:14:06.403 "trtype": "VFIOUSER", 00:14:06.403 "adrfam": "IPv4", 00:14:06.403 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.403 "trsvcid": "0" 00:14:06.403 } 00:14:06.403 ], 00:14:06.403 "allow_any_host": true, 00:14:06.403 "hosts": [], 00:14:06.403 "serial_number": "SPDK2", 00:14:06.403 "model_number": "SPDK bdev Controller", 00:14:06.403 "max_namespaces": 32, 00:14:06.403 "min_cntlid": 1, 00:14:06.403 "max_cntlid": 65519, 00:14:06.403 "namespaces": [ 00:14:06.403 { 00:14:06.403 "nsid": 1, 00:14:06.403 "bdev_name": "Malloc2", 00:14:06.403 "name": "Malloc2", 00:14:06.403 "nguid": "E6166BD3F6CE4E2AAAE3C242405D1890", 00:14:06.403 "uuid": "e6166bd3-f6ce-4e2a-aae3-c242405d1890" 00:14:06.403 } 00:14:06.403 ] 00:14:06.403 } 00:14:06.403 ] 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2986115 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:06.403 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:06.403 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.662 [2024-07-15 15:19:10.361086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.662 Malloc4 00:14:06.662 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:06.662 [2024-07-15 15:19:10.564562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:06.920 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.920 Asynchronous Event Request test 00:14:06.920 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.920 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.920 Registering asynchronous event callbacks... 00:14:06.920 Starting namespace attribute notice tests for all controllers... 00:14:06.920 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:06.920 aer_cb - Changed Namespace 00:14:06.920 Cleaning up... 00:14:06.920 [ 00:14:06.920 { 00:14:06.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.920 "subtype": "Discovery", 00:14:06.920 "listen_addresses": [], 00:14:06.920 "allow_any_host": true, 00:14:06.920 "hosts": [] 00:14:06.920 }, 00:14:06.920 { 00:14:06.920 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.920 "subtype": "NVMe", 00:14:06.920 "listen_addresses": [ 00:14:06.920 { 00:14:06.920 "trtype": "VFIOUSER", 00:14:06.920 "adrfam": "IPv4", 00:14:06.920 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.920 "trsvcid": "0" 00:14:06.920 } 00:14:06.920 ], 00:14:06.920 "allow_any_host": true, 00:14:06.920 "hosts": [], 00:14:06.920 "serial_number": "SPDK1", 00:14:06.920 "model_number": "SPDK bdev Controller", 00:14:06.920 "max_namespaces": 32, 00:14:06.920 "min_cntlid": 1, 00:14:06.920 "max_cntlid": 65519, 00:14:06.920 "namespaces": [ 00:14:06.920 { 00:14:06.920 "nsid": 1, 00:14:06.920 "bdev_name": "Malloc1", 00:14:06.920 "name": "Malloc1", 00:14:06.920 "nguid": "FDEFDA33E1EF4712816BB225E1F55502", 00:14:06.920 "uuid": "fdefda33-e1ef-4712-816b-b225e1f55502" 00:14:06.920 }, 00:14:06.920 { 00:14:06.920 "nsid": 2, 00:14:06.920 "bdev_name": "Malloc3", 00:14:06.920 "name": "Malloc3", 00:14:06.920 "nguid": "E94945F590E24BA3B3B09456FD654551", 00:14:06.920 "uuid": "e94945f5-90e2-4ba3-b3b0-9456fd654551" 00:14:06.920 } 00:14:06.920 ] 00:14:06.920 }, 00:14:06.920 { 00:14:06.920 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.920 "subtype": "NVMe", 00:14:06.920 "listen_addresses": [ 00:14:06.920 { 00:14:06.920 "trtype": "VFIOUSER", 00:14:06.920 "adrfam": "IPv4", 00:14:06.920 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.920 "trsvcid": "0" 00:14:06.920 } 00:14:06.920 ], 00:14:06.920 "allow_any_host": true, 00:14:06.920 "hosts": [], 00:14:06.920 "serial_number": "SPDK2", 00:14:06.920 "model_number": "SPDK bdev Controller", 00:14:06.920 
"max_namespaces": 32, 00:14:06.920 "min_cntlid": 1, 00:14:06.920 "max_cntlid": 65519, 00:14:06.920 "namespaces": [ 00:14:06.920 { 00:14:06.920 "nsid": 1, 00:14:06.920 "bdev_name": "Malloc2", 00:14:06.920 "name": "Malloc2", 00:14:06.920 "nguid": "E6166BD3F6CE4E2AAAE3C242405D1890", 00:14:06.920 "uuid": "e6166bd3-f6ce-4e2a-aae3-c242405d1890" 00:14:06.920 }, 00:14:06.920 { 00:14:06.920 "nsid": 2, 00:14:06.920 "bdev_name": "Malloc4", 00:14:06.920 "name": "Malloc4", 00:14:06.920 "nguid": "B72BEE424D524748BD4F0D28AE0C8D6E", 00:14:06.920 "uuid": "b72bee42-4d52-4748-bd4f-0d28ae0c8d6e" 00:14:06.920 } 00:14:06.920 ] 00:14:06.920 } 00:14:06.920 ] 00:14:06.920 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2986115 00:14:06.920 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:06.920 15:19:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2978150 00:14:06.920 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2978150 ']' 00:14:06.920 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2978150 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2978150 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2978150' 00:14:06.921 killing process with pid 2978150 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2978150 00:14:06.921 15:19:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2978150 00:14:07.181 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2986277 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2986277' 00:14:07.440 Process pid: 2986277 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2986277 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 2986277 ']' 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.440 15:19:11 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.440 15:19:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:07.440 [2024-07-15 15:19:11.140675] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:07.440 [2024-07-15 15:19:11.141554] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:14:07.440 [2024-07-15 15:19:11.141593] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.440 EAL: No free 2048 kB hugepages reported on node 1 00:14:07.440 [2024-07-15 15:19:11.208805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.440 [2024-07-15 15:19:11.274441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.440 [2024-07-15 15:19:11.274485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.440 [2024-07-15 15:19:11.274494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.440 [2024-07-15 15:19:11.274502] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.440 [2024-07-15 15:19:11.274509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.440 [2024-07-15 15:19:11.274604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.440 [2024-07-15 15:19:11.274720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.440 [2024-07-15 15:19:11.274810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.440 [2024-07-15 15:19:11.274812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.698 [2024-07-15 15:19:11.351185] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:07.698 [2024-07-15 15:19:11.351327] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:07.698 [2024-07-15 15:19:11.351571] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:07.698 [2024-07-15 15:19:11.351897] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:07.698 [2024-07-15 15:19:11.352165] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
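The interrupt-mode pass below repeats the earlier vfio-user bring-up, this time creating the transport with -M -I. Condensed to a sketch (rpc.py stands in for the full scripts/rpc.py path; only the first of the two devices is shown):

    # Sketch of the vfio-user target setup replayed below (interrupt mode)
    rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0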
00:14:08.263 15:19:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.263 15:19:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:08.263 15:19:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:09.197 15:19:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:09.456 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:09.456 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:09.456 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:09.456 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:09.456 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:09.456 Malloc1 00:14:09.456 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:09.714 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:09.991 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:09.991 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:09.991 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:09.991 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:10.251 Malloc2 00:14:10.251 15:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:10.508 15:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:10.508 15:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2986277 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 2986277 ']' 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 2986277 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.767 15:19:14 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2986277 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2986277' 00:14:10.767 killing process with pid 2986277 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 2986277 00:14:10.767 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 2986277 00:14:11.025 15:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:11.025 15:19:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:11.025 00:14:11.025 real 0m51.527s 00:14:11.025 user 3m22.879s 00:14:11.025 sys 0m4.778s 00:14:11.025 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.025 15:19:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:11.025 ************************************ 00:14:11.025 END TEST nvmf_vfio_user 00:14:11.025 ************************************ 00:14:11.025 15:19:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:11.025 15:19:14 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:11.026 15:19:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.026 15:19:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.026 15:19:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.026 ************************************ 00:14:11.026 START TEST nvmf_vfio_user_nvme_compliance 00:14:11.026 ************************************ 00:14:11.026 15:19:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:11.284 * Looking for test storage... 
00:14:11.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.284 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2987087 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2987087' 00:14:11.285 Process pid: 2987087 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2987087 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 2987087 ']' 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.285 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:11.285 [2024-07-15 15:19:15.101158] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:14:11.285 [2024-07-15 15:19:15.101214] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.285 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.285 [2024-07-15 15:19:15.170703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.544 [2024-07-15 15:19:15.244955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.544 [2024-07-15 15:19:15.244991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.544 [2024-07-15 15:19:15.245003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.544 [2024-07-15 15:19:15.245011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.544 [2024-07-15 15:19:15.245018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
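The compliance suite below stands up a single malloc-backed subsystem and points the nvme_compliance binary at its socket directory. Reduced to a sketch (rpc_cmd abbreviates the test framework's RPC wrapper; workspace path shortened to ./spdk):

    # Sketch of the compliance target setup and run exercised below
    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc_cmd bdev_malloc_create 64 512 -b malloc0
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    ./spdk/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'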
00:14:11.544 [2024-07-15 15:19:15.245059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.544 [2024-07-15 15:19:15.245155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.544 [2024-07-15 15:19:15.245155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.112 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.112 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:14:12.112 15:19:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:13.082 malloc0 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:13.082 15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.341 
15:19:16 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:13.341 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.341 00:14:13.341 00:14:13.341 CUnit - A unit testing framework for C - Version 2.1-3 00:14:13.341 http://cunit.sourceforge.net/ 00:14:13.341 00:14:13.341 00:14:13.341 Suite: nvme_compliance 00:14:13.341 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 15:19:17.160254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.341 [2024-07-15 15:19:17.161596] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:13.341 [2024-07-15 15:19:17.161611] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:13.341 [2024-07-15 15:19:17.161619] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:13.341 [2024-07-15 15:19:17.163278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.341 passed 00:14:13.341 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 15:19:17.241804] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.341 [2024-07-15 15:19:17.244821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.600 passed 00:14:13.600 Test: admin_identify_ns ...[2024-07-15 15:19:17.324925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.600 [2024-07-15 15:19:17.386845] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:13.600 [2024-07-15 15:19:17.394846] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:13.600 [2024-07-15 15:19:17.415944] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.600 passed 00:14:13.600 Test: admin_get_features_mandatory_features ...[2024-07-15 15:19:17.489513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.600 [2024-07-15 15:19:17.492543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.859 passed 00:14:13.859 Test: admin_get_features_optional_features ...[2024-07-15 15:19:17.570057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.859 [2024-07-15 15:19:17.573072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.859 passed 00:14:13.859 Test: admin_set_features_number_of_queues ...[2024-07-15 15:19:17.648670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.859 [2024-07-15 15:19:17.752920] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.117 passed 00:14:14.117 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 15:19:17.828310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.117 [2024-07-15 15:19:17.831337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.117 passed 00:14:14.117 Test: admin_get_log_page_with_lpo ...[2024-07-15 15:19:17.907946] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.117 [2024-07-15 15:19:17.977847] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:14.117 [2024-07-15 15:19:17.990914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.117 passed 00:14:14.376 Test: fabric_property_get ...[2024-07-15 15:19:18.064390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.376 [2024-07-15 15:19:18.065633] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:14.376 [2024-07-15 15:19:18.067409] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.376 passed 00:14:14.376 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 15:19:18.143892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.376 [2024-07-15 15:19:18.145129] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:14.376 [2024-07-15 15:19:18.146913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.376 passed 00:14:14.376 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 15:19:18.222445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.635 [2024-07-15 15:19:18.305845] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:14.635 [2024-07-15 15:19:18.321843] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:14.635 [2024-07-15 15:19:18.326928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.635 passed 00:14:14.635 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 15:19:18.402303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.635 [2024-07-15 15:19:18.403546] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:14.635 [2024-07-15 15:19:18.405319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.635 passed 00:14:14.635 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 15:19:18.479818] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.894 [2024-07-15 15:19:18.557843] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:14.894 [2024-07-15 15:19:18.581843] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:14.894 [2024-07-15 15:19:18.586928] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.894 passed 00:14:14.894 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 15:19:18.659315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.894 [2024-07-15 15:19:18.660547] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:14.894 [2024-07-15 15:19:18.660576] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:14.894 [2024-07-15 15:19:18.662335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.894 passed 00:14:14.894 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 15:19:18.739946] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.154 [2024-07-15 15:19:18.833838] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:15.155 [2024-07-15 15:19:18.841838] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:15.155 [2024-07-15 15:19:18.849839] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:15.155 [2024-07-15 15:19:18.857842] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:15.155 [2024-07-15 15:19:18.886914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:15.155 passed 00:14:15.155 Test: admin_create_io_sq_verify_pc ...[2024-07-15 15:19:18.959447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.155 [2024-07-15 15:19:18.974846] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:15.155 [2024-07-15 15:19:18.992627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:15.155 passed 00:14:15.413 Test: admin_create_io_qp_max_qps ...[2024-07-15 15:19:19.070176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.350 [2024-07-15 15:19:20.181843] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:16.933 [2024-07-15 15:19:20.560550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.933 passed 00:14:16.933 Test: admin_create_io_sq_shared_cq ...[2024-07-15 15:19:20.634377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.933 [2024-07-15 15:19:20.765838] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:16.934 [2024-07-15 15:19:20.802911] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.934 passed 00:14:16.934 00:14:16.934 Run Summary: Type Total Ran Passed Failed Inactive 00:14:16.934 suites 1 1 n/a 0 0 00:14:16.934 tests 18 18 18 0 0 00:14:16.934 asserts 360 360 360 0 n/a 00:14:16.934 00:14:16.934 Elapsed time = 1.497 seconds 00:14:17.192 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2987087 00:14:17.192 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 2987087 ']' 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 2987087 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2987087 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2987087' 00:14:17.193 killing process with pid 2987087 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 2987087 00:14:17.193 15:19:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 2987087 00:14:17.193 15:19:21 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:17.452 00:14:17.452 real 0m6.197s 00:14:17.452 user 0m17.462s 00:14:17.452 sys 0m0.716s 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.452 ************************************ 00:14:17.452 END TEST nvmf_vfio_user_nvme_compliance 00:14:17.452 ************************************ 00:14:17.452 15:19:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:17.452 15:19:21 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:17.452 15:19:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:17.452 15:19:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.452 15:19:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:17.452 ************************************ 00:14:17.452 START TEST nvmf_vfio_user_fuzz 00:14:17.452 ************************************ 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:17.452 * Looking for test storage... 00:14:17.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.452 15:19:21 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:17.452 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2988219 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2988219' 00:14:17.453 Process pid: 2988219 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2988219 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2988219 ']' 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
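The fuzz target below reuses the same single-subsystem layout (malloc0 behind nqn.2021-09.io.spdk:cnode0). The run is time-boxed to 30 seconds with a fixed -S seed, which matches the roughly 15:19:23 to 15:19:53 window in the timestamps; as a sketch of the invocation that follows (workspace path shortened to ./spdk):

    # Sketch of the vfio-user fuzz run exercised below
    ./spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a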
00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.453 15:19:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:18.388 15:19:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.388 15:19:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:14:18.388 15:19:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:19.324 malloc0 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.324 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:19.582 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.582 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:19.582 15:19:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:51.661 Fuzzing completed. 
Shutting down the fuzz application 00:14:51.661 00:14:51.661 Dumping successful admin opcodes: 00:14:51.661 8, 9, 10, 24, 00:14:51.661 Dumping successful io opcodes: 00:14:51.661 0, 00:14:51.661 NS: 0x200003a1ef00 I/O qp, Total commands completed: 884082, total successful commands: 3440, random_seed: 3716983488 00:14:51.661 NS: 0x200003a1ef00 admin qp, Total commands completed: 215272, total successful commands: 1734, random_seed: 200361600 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2988219 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2988219 ']' 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 2988219 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2988219 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2988219' 00:14:51.661 killing process with pid 2988219 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 2988219 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 2988219 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:51.661 00:14:51.661 real 0m32.794s 00:14:51.661 user 0m28.806s 00:14:51.661 sys 0m33.173s 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.661 15:19:53 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.661 ************************************ 00:14:51.661 END TEST nvmf_vfio_user_fuzz 00:14:51.661 ************************************ 00:14:51.661 15:19:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:51.661 15:19:54 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:51.661 15:19:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:51.661 15:19:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.661 15:19:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.661 ************************************ 
00:14:51.661 START TEST nvmf_host_management 00:14:51.661 ************************************ 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:51.661 * Looking for test storage... 00:14:51.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.661 15:19:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.661 
15:19:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.662 15:19:54 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.662 15:19:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:58.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:58.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.332 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:58.333 Found net devices under 0000:af:00.0: cvl_0_0 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:58.333 Found net devices under 0000:af:00.1: cvl_0_1 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.333 15:20:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:58.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:58.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:14:58.333 00:14:58.333 --- 10.0.0.2 ping statistics --- 00:14:58.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.333 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:14:58.333 00:14:58.333 --- 10.0.0.1 ping statistics --- 00:14:58.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.333 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2996999 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2996999 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2996999 ']' 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:58.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.333 15:20:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.333 [2024-07-15 15:20:01.380663] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:14:58.333 [2024-07-15 15:20:01.380711] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.333 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.333 [2024-07-15 15:20:01.455688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.333 [2024-07-15 15:20:01.532289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.333 [2024-07-15 15:20:01.532332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.333 [2024-07-15 15:20:01.532342] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.333 [2024-07-15 15:20:01.532351] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.333 [2024-07-15 15:20:01.532358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:58.333 [2024-07-15 15:20:01.532461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.333 [2024-07-15 15:20:01.532543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.333 [2024-07-15 15:20:01.532571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.333 [2024-07-15 15:20:01.532572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.333 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.333 [2024-07-15 15:20:02.231695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.592 15:20:02 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.592 Malloc0 00:14:58.592 [2024-07-15 15:20:02.298470] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2997218 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2997218 /var/tmp/bdevperf.sock 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2997218 ']' 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
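For the TCP leg, the nvmf_tcp_init sequence earlier in the trace moved one e810 port into a private network namespace, and nvmfappstart above then launches the target inside it. A condensed sketch of that test-bed, using the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

The two pings in the trace then confirm reachability in both directions before any NVMe/TCP traffic is attempted.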
00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:58.592 { 00:14:58.592 "params": { 00:14:58.592 "name": "Nvme$subsystem", 00:14:58.592 "trtype": "$TEST_TRANSPORT", 00:14:58.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:58.592 "adrfam": "ipv4", 00:14:58.592 "trsvcid": "$NVMF_PORT", 00:14:58.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:58.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:58.592 "hdgst": ${hdgst:-false}, 00:14:58.592 "ddgst": ${ddgst:-false} 00:14:58.592 }, 00:14:58.592 "method": "bdev_nvme_attach_controller" 00:14:58.592 } 00:14:58.592 EOF 00:14:58.592 )") 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:58.592 15:20:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:58.592 "params": { 00:14:58.592 "name": "Nvme0", 00:14:58.592 "trtype": "tcp", 00:14:58.592 "traddr": "10.0.0.2", 00:14:58.592 "adrfam": "ipv4", 00:14:58.592 "trsvcid": "4420", 00:14:58.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:58.592 "hdgst": false, 00:14:58.592 "ddgst": false 00:14:58.592 }, 00:14:58.592 "method": "bdev_nvme_attach_controller" 00:14:58.592 }' 00:14:58.592 [2024-07-15 15:20:02.403222] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:14:58.592 [2024-07-15 15:20:02.403269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997218 ] 00:14:58.592 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.592 [2024-07-15 15:20:02.474018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.850 [2024-07-15 15:20:02.543952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.850 Running I/O for 10 seconds... 
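bdevperf receives its controller configuration as JSON on file descriptor 63 rather than over RPC; the heredoc above is the per-subsystem template, and the printf output is its expansion for subsystem 0. A minimal sketch of the invocation, assuming process substitution is what backs /dev/fd/63 on the traced command line:

    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -q 64 -o 65536 -w verify -t 10 \
        --json <(gen_nvmf_target_json 0)    # yields the bdev_nvme_attach_controller object printed above

-q 64 is the queue depth, -o 65536 the I/O size in bytes, -w verify a read-back-and-check workload, and -t 10 the run time in seconds.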
00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.422 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:59.422 [2024-07-15 15:20:03.285676] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc820 is same with the state(5) to be set 00:14:59.422 [2024-07-15 15:20:03.285749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc820 is same with the state(5) to be set 00:14:59.422 [2024-07-15 15:20:03.285764] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc820 is same with the state(5) to be set 
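waitforio, traced above, gates the next step on actual I/O progress: it polls bdevperf's iostat until the read count crosses a threshold (here 707 reads against a floor of 100). The polling step reduces to the following sketch, with the jq filter taken from the trace and scripts/rpc.py standing in for rpc_cmd:

    read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && break    # enough reads observed, move on

Once that holds, the test removes the host from the subsystem (nvmf_subsystem_remove_host above), forcibly disconnecting the controller; the nvmf_tcp_qpair_set_recv_state errors and the ABORTED - SQ DELETION completions that follow appear to be the expected fallout of that forced disconnect, with every in-flight read failed back to bdevperf.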
00:14:59.422 [2024-07-15 15:20:03.285773] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fc820 is same with the state(5) to be set
00:14:59.422 [... the identical nvmf_tcp_qpair_set_recv_state message repeats some sixty more times with successive timestamps through 15:20:03.286304 ...]
00:14:59.422 [2024-07-15 15:20:03.286423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.422 [2024-07-15 15:20:03.286457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:59.422 [... the READ command / ABORTED - SQ DELETION completion pair repeats for cid 1 through 51, lba 98432 through 104832 in 128-block steps ...]
00:14:59.423 [2024-07-15 15:20:03.287560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15
15:20:03.287571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 15:20:03.287762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:59.423 [2024-07-15 15:20:03.287773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:59.423 [2024-07-15 
00:14:59.423 [2024-07-15 15:20:03.287815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2218a30 is same with the state(5) to be set
00:14:59.423 [2024-07-15 15:20:03.287874] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2218a30 was disconnected and freed. reset controller.
00:14:59.423 [... four admin-queue pairs elided (15:20:03.287916 - 15:20:03.287988): nvme_admin_qpair_print_command ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 cdw10:00000000 cdw11:00000000, each answered with ABORTED - SQ DELETION (00/08) ...]
00:14:59.423 [2024-07-15 15:20:03.287998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07a70 is same with the state(5) to be set
00:14:59.423 [2024-07-15 15:20:03.288883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:59.423 task offset: 98304 on job bdev=Nvme0n1 fails
00:14:59.423
00:14:59.423 Latency(us)
00:14:59.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:59.424 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:59.424 Job: Nvme0n1 ended in about 0.54 seconds with error
00:14:59.424 Verification LBA range: start 0x0 length 0x400
00:14:59.424 Nvme0n1 : 0.54 1412.59 88.29 117.72 0.00 41022.21 5845.81 40475.03
00:14:59.424 ===================================================================================================================
00:14:59.424 Total : 1412.59 88.29 117.72 0.00 41022.21 5845.81 40475.03
00:14:59.424 [2024-07-15 15:20:03.290403] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:59.424 [2024-07-15 15:20:03.290421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e07a70 (9): Bad file descriptor
00:14:59.424 15:20:03 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.424 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:59.424 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.424 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:59.424 15:20:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.424 15:20:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:59.424 [2024-07-15 15:20:03.311448] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2997218 00:15:00.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2997218) - No such process 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:00.798 { 00:15:00.798 "params": { 00:15:00.798 "name": "Nvme$subsystem", 00:15:00.798 "trtype": "$TEST_TRANSPORT", 00:15:00.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:00.798 "adrfam": "ipv4", 00:15:00.798 "trsvcid": "$NVMF_PORT", 00:15:00.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:00.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:00.798 "hdgst": ${hdgst:-false}, 00:15:00.798 "ddgst": ${ddgst:-false} 00:15:00.798 }, 00:15:00.798 "method": "bdev_nvme_attach_controller" 00:15:00.798 } 00:15:00.798 EOF 00:15:00.798 )") 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:00.798 15:20:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:00.798 "params": { 00:15:00.798 "name": "Nvme0", 00:15:00.798 "trtype": "tcp", 00:15:00.798 "traddr": "10.0.0.2", 00:15:00.798 "adrfam": "ipv4", 00:15:00.798 "trsvcid": "4420", 00:15:00.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:00.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:00.798 "hdgst": false, 00:15:00.798 "ddgst": false 00:15:00.798 }, 00:15:00.798 "method": "bdev_nvme_attach_controller" 00:15:00.798 }' 00:15:00.798 [2024-07-15 15:20:04.356736] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
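For anyone replaying this step by hand: the harness feeds bdevperf its controller configuration as JSON over a file descriptor (--json /dev/fd/62 above), with the gen_nvmf_target_json helper from test/nvmf/common.sh (whose expansion is traced above) emitting one bdev_nvme_attach_controller entry per subsystem. A minimal standalone sketch of the same pattern, assuming an SPDK build tree and a target still listening at 10.0.0.2:4420 as in this run:

  # sketch only -- run from the SPDK repo root; sourcing the harness
  # common.sh pulls in more than this one helper
  source test/nvmf/common.sh            # defines gen_nvmf_target_json
  # process substitution hands bdevperf the config on a /dev/fd path,
  # matching the --json /dev/fd/62 seen in the trace above
  ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1

The DPDK EAL parameter line that follows is the startup banner of exactly this bdevperf process.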
00:15:00.798 [2024-07-15 15:20:04.356788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2997518 ]
00:15:00.798 EAL: No free 2048 kB hugepages reported on node 1
00:15:00.798 [2024-07-15 15:20:04.427574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:00.798 [2024-07-15 15:20:04.496542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:15:00.798 Running I/O for 1 seconds...
00:15:02.177
00:15:02.177 Latency(us)
00:15:02.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:02.177 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:02.177 Verification LBA range: start 0x0 length 0x400
00:15:02.177 Nvme0n1 : 1.01 1517.53 94.85 0.00 0.00 41619.06 8231.32 39636.17
00:15:02.177 ===================================================================================================================
00:15:02.177 Total : 1517.53 94.85 0.00 0.00 41619.06 8231.32 39636.17
00:15:02.177
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2996999 ']'
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2996999
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2996999 ']'
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2996999
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:02.177 15:20:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2996999
00:15:02.177 15:20:06 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:02.177 15:20:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:02.177 15:20:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2996999' 00:15:02.177 killing process with pid 2996999 00:15:02.177 15:20:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2996999 00:15:02.177 15:20:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2996999 00:15:02.436 [2024-07-15 15:20:06.210454] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.436 15:20:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.970 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.970 15:20:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:04.970 00:15:04.970 real 0m14.247s 00:15:04.970 user 0m22.951s 00:15:04.970 sys 0m6.826s 00:15:04.970 15:20:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:04.970 15:20:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:04.970 ************************************ 00:15:04.970 END TEST nvmf_host_management 00:15:04.970 ************************************ 00:15:04.970 15:20:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:04.970 15:20:08 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:04.970 15:20:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:04.970 15:20:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.970 15:20:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:04.970 ************************************ 00:15:04.970 START TEST nvmf_lvol 00:15:04.970 ************************************ 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:04.970 * Looking for test storage... 
00:15:04.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.970 15:20:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.971 15:20:08 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.971 15:20:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.535 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:11.536 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:11.536 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:11.536 Found net devices under 0000:af:00.0: cvl_0_0 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:11.536 Found net devices under 0000:af:00.1: cvl_0_1 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.536 
15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.536 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:15:11.795 00:15:11.795 --- 10.0.0.2 ping statistics --- 00:15:11.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.795 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:15:11.795 00:15:11.795 --- 10.0.0.1 ping statistics --- 00:15:11.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.795 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3001608 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3001608 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3001608 ']' 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.795 15:20:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:11.795 [2024-07-15 15:20:15.563134] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:11.795 [2024-07-15 15:20:15.563191] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.795 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.795 [2024-07-15 15:20:15.637866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:12.053 [2024-07-15 15:20:15.715881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.054 [2024-07-15 15:20:15.715914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:12.054 [2024-07-15 15:20:15.715923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.054 [2024-07-15 15:20:15.715931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.054 [2024-07-15 15:20:15.715939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.054 [2024-07-15 15:20:15.715982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.054 [2024-07-15 15:20:15.716075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.054 [2024-07-15 15:20:15.716077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.620 15:20:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.620 15:20:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:12.620 15:20:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.620 15:20:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.620 15:20:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:12.620 15:20:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.620 15:20:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:12.878 [2024-07-15 15:20:16.556758] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.878 15:20:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:12.878 15:20:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:12.878 15:20:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.136 15:20:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:13.136 15:20:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:13.395 15:20:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:13.653 15:20:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ef76852b-d6f9-4103-bc25-4a3417bbd9dd 00:15:13.653 15:20:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ef76852b-d6f9-4103-bc25-4a3417bbd9dd lvol 20 00:15:13.653 15:20:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2bd86848-1445-4c2b-a656-b19e600d7220 00:15:13.653 15:20:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:13.910 15:20:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2bd86848-1445-4c2b-a656-b19e600d7220 00:15:14.168 15:20:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
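The trace above (target/nvmf_lvol.sh@21 through @37) has just provisioned the full stack that the perf run below exercises. Condensed into a sketch with the rpc.py path shortened (the UUIDs are from this run and will differ elsewhere); the listener notice that follows confirms the final step:

  # sketch of the provisioning sequence traced above
  rpc.py nvmf_create_transport -t tcp -o -u 8192                    # TCP transport
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe the two
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # ef76852b-... in this run
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # size 20 = LVOL_BDEV_INIT_SIZE
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420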
00:15:14.168 [2024-07-15 15:20:18.058242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.427 15:20:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:14.427 15:20:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3002011 00:15:14.427 15:20:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:14.427 15:20:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:14.427 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.802 15:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2bd86848-1445-4c2b-a656-b19e600d7220 MY_SNAPSHOT 00:15:15.802 15:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e3914238-bda6-4be0-b5c8-6274c55e2607 00:15:15.802 15:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2bd86848-1445-4c2b-a656-b19e600d7220 30 00:15:15.802 15:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e3914238-bda6-4be0-b5c8-6274c55e2607 MY_CLONE 00:15:16.059 15:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d8bc30e1-afa8-4083-9b36-b95944378623 00:15:16.059 15:20:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d8bc30e1-afa8-4083-9b36-b95944378623 00:15:16.625 15:20:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3002011 00:15:24.806 Initializing NVMe Controllers 00:15:24.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:24.806 Controller IO queue size 128, less than required. 00:15:24.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:24.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:24.806 Initialization complete. Launching workers. 
00:15:24.807 ========================================================
00:15:24.807 Latency(us)
00:15:24.807 Device Information : IOPS MiB/s Average min max
00:15:24.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12502.90 48.84 10239.22 540.08 59792.45
00:15:24.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12391.40 48.40 10333.74 3011.77 68861.75
00:15:24.807 ========================================================
00:15:24.807 Total : 24894.30 97.24 10286.27 540.08 68861.75
00:15:24.807
00:15:24.807 15:20:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:15:25.064 15:20:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2bd86848-1445-4c2b-a656-b19e600d7220
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef76852b-d6f9-4103-bc25-4a3417bbd9dd
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3001608 ']'
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3001608
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3001608 ']'
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3001608
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3001608
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3001608'
killing process with pid 3001608
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3001608
00:15:25.321 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3001608
00:15:25.578 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:25.578 
15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.578 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.578 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.578 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.578 15:20:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.578 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.578 15:20:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.136 00:15:28.136 real 0m23.100s 00:15:28.136 user 1m2.062s 00:15:28.136 sys 0m9.962s 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 ************************************ 00:15:28.136 END TEST nvmf_lvol 00:15:28.136 ************************************ 00:15:28.136 15:20:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.136 15:20:31 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:28.136 15:20:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.136 15:20:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.136 15:20:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.136 ************************************ 00:15:28.136 START TEST nvmf_lvs_grow 00:15:28.136 ************************************ 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:28.136 * Looking for test storage... 
00:15:28.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.136 15:20:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:34.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:34.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:34.719 Found net devices under 0000:af:00.0: cvl_0_0 00:15:34.719 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:34.720 Found net devices under 0000:af:00.1: cvl_0_1 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:34.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:15:34.720 00:15:34.720 --- 10.0.0.2 ping statistics --- 00:15:34.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.720 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:15:34.720 00:15:34.720 --- 10.0.0.1 ping statistics --- 00:15:34.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.720 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3007537 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3007537 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3007537 ']' 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.720 15:20:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:34.720 [2024-07-15 15:20:38.461924] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:34.720 [2024-07-15 15:20:38.461972] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.720 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.720 [2024-07-15 15:20:38.534154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.720 [2024-07-15 15:20:38.606671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.720 [2024-07-15 15:20:38.606711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:34.720 [2024-07-15 15:20:38.606721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.720 [2024-07-15 15:20:38.606729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.720 [2024-07-15 15:20:38.606736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.720 [2024-07-15 15:20:38.606759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:35.656 [2024-07-15 15:20:39.446674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:35.656 ************************************ 00:15:35.656 START TEST lvs_grow_clean 00:15:35.656 ************************************ 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:35.656 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:35.915 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:35.915 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:36.175 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:36.175 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:36.175 15:20:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:36.175 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:36.175 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:36.175 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 lvol 150 00:15:36.433 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d3c7a8e3-f081-4c55-85a8-d48cc0b64986 00:15:36.433 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:36.433 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:36.690 [2024-07-15 15:20:40.387229] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:36.690 [2024-07-15 15:20:40.387285] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:36.690 true 00:15:36.690 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:36.690 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:36.690 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:36.690 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:36.949 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d3c7a8e3-f081-4c55-85a8-d48cc0b64986 00:15:37.207 15:20:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:37.207 [2024-07-15 15:20:41.037186] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.207 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3008104 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3008104 /var/tmp/bdevperf.sock 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3008104 ']' 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:37.466 15:20:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:37.466 [2024-07-15 15:20:41.259509] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
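Condensed from the xtrace above, the clean variant's setup reduces to the following RPC sequence. Paths are shortened relative to the jenkins workspace, and the lvstore UUID and lvol bdev name are captured from rpc.py's stdout rather than hard-coded, matching how the script assigns $lvs and $lvol in this run:

# 200 MiB backing file -> AIO bdev -> lvstore (4 MiB clusters) -> 150 MiB lvol,
# then export the lvol over NVMe/TCP and point bdevperf at it.
truncate -s 200M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
# The grow step later in the test: enlarge the file, rescan, grow the lvstore.
truncate -s 400M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_rescan aio_bdev
scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
# bdevperf runs as a separate initiator-side process on its own RPC socket:
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &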
00:15:37.466 [2024-07-15 15:20:41.259561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3008104 ] 00:15:37.466 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.466 [2024-07-15 15:20:41.328808] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.724 [2024-07-15 15:20:41.404292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.292 15:20:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.292 15:20:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:38.292 15:20:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:38.551 Nvme0n1 00:15:38.551 15:20:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:38.551 [ 00:15:38.551 { 00:15:38.551 "name": "Nvme0n1", 00:15:38.551 "aliases": [ 00:15:38.551 "d3c7a8e3-f081-4c55-85a8-d48cc0b64986" 00:15:38.551 ], 00:15:38.551 "product_name": "NVMe disk", 00:15:38.551 "block_size": 4096, 00:15:38.551 "num_blocks": 38912, 00:15:38.551 "uuid": "d3c7a8e3-f081-4c55-85a8-d48cc0b64986", 00:15:38.551 "assigned_rate_limits": { 00:15:38.551 "rw_ios_per_sec": 0, 00:15:38.551 "rw_mbytes_per_sec": 0, 00:15:38.551 "r_mbytes_per_sec": 0, 00:15:38.551 "w_mbytes_per_sec": 0 00:15:38.551 }, 00:15:38.551 "claimed": false, 00:15:38.551 "zoned": false, 00:15:38.551 "supported_io_types": { 00:15:38.551 "read": true, 00:15:38.551 "write": true, 00:15:38.551 "unmap": true, 00:15:38.551 "flush": true, 00:15:38.551 "reset": true, 00:15:38.551 "nvme_admin": true, 00:15:38.551 "nvme_io": true, 00:15:38.551 "nvme_io_md": false, 00:15:38.551 "write_zeroes": true, 00:15:38.551 "zcopy": false, 00:15:38.551 "get_zone_info": false, 00:15:38.551 "zone_management": false, 00:15:38.551 "zone_append": false, 00:15:38.551 "compare": true, 00:15:38.551 "compare_and_write": true, 00:15:38.551 "abort": true, 00:15:38.551 "seek_hole": false, 00:15:38.551 "seek_data": false, 00:15:38.551 "copy": true, 00:15:38.551 "nvme_iov_md": false 00:15:38.551 }, 00:15:38.551 "memory_domains": [ 00:15:38.551 { 00:15:38.551 "dma_device_id": "system", 00:15:38.551 "dma_device_type": 1 00:15:38.551 } 00:15:38.551 ], 00:15:38.551 "driver_specific": { 00:15:38.551 "nvme": [ 00:15:38.551 { 00:15:38.551 "trid": { 00:15:38.551 "trtype": "TCP", 00:15:38.551 "adrfam": "IPv4", 00:15:38.551 "traddr": "10.0.0.2", 00:15:38.551 "trsvcid": "4420", 00:15:38.551 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:38.551 }, 00:15:38.551 "ctrlr_data": { 00:15:38.551 "cntlid": 1, 00:15:38.551 "vendor_id": "0x8086", 00:15:38.551 "model_number": "SPDK bdev Controller", 00:15:38.551 "serial_number": "SPDK0", 00:15:38.551 "firmware_revision": "24.09", 00:15:38.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:38.551 "oacs": { 00:15:38.551 "security": 0, 00:15:38.551 "format": 0, 00:15:38.551 "firmware": 0, 00:15:38.551 "ns_manage": 0 00:15:38.551 }, 00:15:38.551 "multi_ctrlr": true, 00:15:38.551 "ana_reporting": false 00:15:38.551 }, 
00:15:38.551 "vs": { 00:15:38.551 "nvme_version": "1.3" 00:15:38.551 }, 00:15:38.551 "ns_data": { 00:15:38.551 "id": 1, 00:15:38.551 "can_share": true 00:15:38.551 } 00:15:38.551 } 00:15:38.551 ], 00:15:38.551 "mp_policy": "active_passive" 00:15:38.551 } 00:15:38.551 } 00:15:38.551 ] 00:15:38.551 15:20:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3008371 00:15:38.551 15:20:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:38.551 15:20:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.810 Running I/O for 10 seconds... 00:15:39.751 Latency(us) 00:15:39.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.751 Nvme0n1 : 1.00 23910.00 93.40 0.00 0.00 0.00 0.00 0.00 00:15:39.751 =================================================================================================================== 00:15:39.751 Total : 23910.00 93.40 0.00 0.00 0.00 0.00 0.00 00:15:39.751 00:15:40.684 15:20:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:40.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.684 Nvme0n1 : 2.00 24044.50 93.92 0.00 0.00 0.00 0.00 0.00 00:15:40.684 =================================================================================================================== 00:15:40.685 Total : 24044.50 93.92 0.00 0.00 0.00 0.00 0.00 00:15:40.685 00:15:40.942 true 00:15:40.942 15:20:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:40.942 15:20:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:40.942 15:20:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:40.942 15:20:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:40.942 15:20:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3008371 00:15:41.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.877 Nvme0n1 : 3.00 24055.33 93.97 0.00 0.00 0.00 0.00 0.00 00:15:41.877 =================================================================================================================== 00:15:41.877 Total : 24055.33 93.97 0.00 0.00 0.00 0.00 0.00 00:15:41.877 00:15:42.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.812 Nvme0n1 : 4.00 24134.25 94.27 0.00 0.00 0.00 0.00 0.00 00:15:42.812 =================================================================================================================== 00:15:42.812 Total : 24134.25 94.27 0.00 0.00 0.00 0.00 0.00 00:15:42.812 00:15:43.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.747 Nvme0n1 : 5.00 24174.00 94.43 0.00 0.00 0.00 0.00 0.00 00:15:43.747 =================================================================================================================== 00:15:43.747 
Total : 24174.00 94.43 0.00 0.00 0.00 0.00 0.00 00:15:43.747 00:15:44.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.682 Nvme0n1 : 6.00 24209.00 94.57 0.00 0.00 0.00 0.00 0.00 00:15:44.682 =================================================================================================================== 00:15:44.682 Total : 24209.00 94.57 0.00 0.00 0.00 0.00 0.00 00:15:44.682 00:15:46.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.057 Nvme0n1 : 7.00 24236.43 94.67 0.00 0.00 0.00 0.00 0.00 00:15:46.057 =================================================================================================================== 00:15:46.057 Total : 24236.43 94.67 0.00 0.00 0.00 0.00 0.00 00:15:46.057 00:15:46.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.673 Nvme0n1 : 8.00 24203.12 94.54 0.00 0.00 0.00 0.00 0.00 00:15:46.673 =================================================================================================================== 00:15:46.673 Total : 24203.12 94.54 0.00 0.00 0.00 0.00 0.00 00:15:46.673 00:15:48.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.047 Nvme0n1 : 9.00 24216.11 94.59 0.00 0.00 0.00 0.00 0.00 00:15:48.047 =================================================================================================================== 00:15:48.047 Total : 24216.11 94.59 0.00 0.00 0.00 0.00 0.00 00:15:48.047 00:15:48.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.981 Nvme0n1 : 10.00 24240.60 94.69 0.00 0.00 0.00 0.00 0.00 00:15:48.981 =================================================================================================================== 00:15:48.981 Total : 24240.60 94.69 0.00 0.00 0.00 0.00 0.00 00:15:48.981 00:15:48.981 00:15:48.981 Latency(us) 00:15:48.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.981 Nvme0n1 : 10.00 24237.52 94.68 0.00 0.00 5277.62 2608.33 9751.76 00:15:48.981 =================================================================================================================== 00:15:48.981 Total : 24237.52 94.68 0.00 0.00 5277.62 2608.33 9751.76 00:15:48.981 0 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3008104 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3008104 ']' 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3008104 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3008104 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3008104' 00:15:48.981 killing process with pid 3008104 00:15:48.981 15:20:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3008104 00:15:48.981 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.981 00:15:48.981 Latency(us) 00:15:48.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.981 =================================================================================================================== 00:15:48.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3008104 00:15:48.981 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:49.240 15:20:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:49.498 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:49.498 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:49.498 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:49.498 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:49.498 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:49.757 [2024-07-15 15:20:53.495400] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:49.757 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:50.016 request: 00:15:50.016 { 00:15:50.016 "uuid": "1e6b7121-0c16-4d81-b07e-8e58f8b11684", 00:15:50.016 "method": "bdev_lvol_get_lvstores", 00:15:50.016 "req_id": 1 00:15:50.016 } 00:15:50.016 Got JSON-RPC error response 00:15:50.016 response: 00:15:50.016 { 00:15:50.016 "code": -19, 00:15:50.016 "message": "No such device" 00:15:50.016 } 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:50.016 aio_bdev 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d3c7a8e3-f081-4c55-85a8-d48cc0b64986 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=d3c7a8e3-f081-4c55-85a8-d48cc0b64986 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:50.016 15:20:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:50.275 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d3c7a8e3-f081-4c55-85a8-d48cc0b64986 -t 2000 00:15:50.275 [ 00:15:50.275 { 00:15:50.275 "name": "d3c7a8e3-f081-4c55-85a8-d48cc0b64986", 00:15:50.275 "aliases": [ 00:15:50.275 "lvs/lvol" 00:15:50.275 ], 00:15:50.275 "product_name": "Logical Volume", 00:15:50.275 "block_size": 4096, 00:15:50.275 "num_blocks": 38912, 00:15:50.275 "uuid": "d3c7a8e3-f081-4c55-85a8-d48cc0b64986", 00:15:50.275 "assigned_rate_limits": { 00:15:50.275 "rw_ios_per_sec": 0, 00:15:50.275 "rw_mbytes_per_sec": 0, 00:15:50.275 "r_mbytes_per_sec": 0, 00:15:50.275 "w_mbytes_per_sec": 0 00:15:50.275 }, 00:15:50.275 "claimed": false, 00:15:50.275 "zoned": false, 00:15:50.275 "supported_io_types": { 00:15:50.275 "read": true, 00:15:50.275 "write": true, 00:15:50.275 "unmap": true, 00:15:50.275 "flush": false, 00:15:50.275 "reset": true, 00:15:50.275 "nvme_admin": false, 00:15:50.275 "nvme_io": false, 00:15:50.275 
"nvme_io_md": false, 00:15:50.275 "write_zeroes": true, 00:15:50.275 "zcopy": false, 00:15:50.275 "get_zone_info": false, 00:15:50.275 "zone_management": false, 00:15:50.275 "zone_append": false, 00:15:50.275 "compare": false, 00:15:50.275 "compare_and_write": false, 00:15:50.275 "abort": false, 00:15:50.275 "seek_hole": true, 00:15:50.275 "seek_data": true, 00:15:50.275 "copy": false, 00:15:50.275 "nvme_iov_md": false 00:15:50.275 }, 00:15:50.275 "driver_specific": { 00:15:50.275 "lvol": { 00:15:50.275 "lvol_store_uuid": "1e6b7121-0c16-4d81-b07e-8e58f8b11684", 00:15:50.275 "base_bdev": "aio_bdev", 00:15:50.275 "thin_provision": false, 00:15:50.275 "num_allocated_clusters": 38, 00:15:50.275 "snapshot": false, 00:15:50.275 "clone": false, 00:15:50.275 "esnap_clone": false 00:15:50.275 } 00:15:50.275 } 00:15:50.275 } 00:15:50.275 ] 00:15:50.534 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:50.534 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:50.534 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:50.534 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:50.534 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:50.534 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:50.804 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:50.804 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d3c7a8e3-f081-4c55-85a8-d48cc0b64986 00:15:50.804 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e6b7121-0c16-4d81-b07e-8e58f8b11684 00:15:51.063 15:20:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:51.322 00:15:51.322 real 0m15.551s 00:15:51.322 user 0m14.609s 00:15:51.322 sys 0m2.018s 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:51.322 ************************************ 00:15:51.322 END TEST lvs_grow_clean 00:15:51.322 ************************************ 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:51.322 ************************************ 00:15:51.322 START TEST lvs_grow_dirty 00:15:51.322 ************************************ 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:51.322 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:51.581 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:51.581 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:51.840 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4bb94e0b-f55b-4882-8439-751dbb2d3903 00:15:51.840 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:15:51.840 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:51.840 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:51.840 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:51.840 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 lvol 150 00:15:52.098 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6cfc9cfb-091f-4860-9c4a-d843c53eb37c 00:15:52.098 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:52.098 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:52.358 
[2024-07-15 15:20:56.009494] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:52.358 [2024-07-15 15:20:56.009550] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:52.358 true 00:15:52.358 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:15:52.358 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:52.358 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:52.358 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:52.616 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6cfc9cfb-091f-4860-9c4a-d843c53eb37c 00:15:52.875 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:52.875 [2024-07-15 15:20:56.711580] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:52.875 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3010802 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3010802 /var/tmp/bdevperf.sock 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3010802 ']' 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:53.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
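Once the bdevperf process above is listening on /var/tmp/bdevperf.sock, the script attaches the exported namespace as Nvme0 and then drives the timed workload through the bdevperf helper script. A condensed sketch, with the NQN and target address taken from this run:

# Attach the NVMe/TCP namespace inside bdevperf, then run the 10 s randwrite job.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests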
00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.134 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:53.134 [2024-07-15 15:20:56.934070] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:53.134 [2024-07-15 15:20:56.934121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010802 ] 00:15:53.134 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.134 [2024-07-15 15:20:57.001530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.392 [2024-07-15 15:20:57.071039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.959 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.959 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:53.959 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:54.216 Nvme0n1 00:15:54.216 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:54.474 [ 00:15:54.474 { 00:15:54.474 "name": "Nvme0n1", 00:15:54.474 "aliases": [ 00:15:54.474 "6cfc9cfb-091f-4860-9c4a-d843c53eb37c" 00:15:54.475 ], 00:15:54.475 "product_name": "NVMe disk", 00:15:54.475 "block_size": 4096, 00:15:54.475 "num_blocks": 38912, 00:15:54.475 "uuid": "6cfc9cfb-091f-4860-9c4a-d843c53eb37c", 00:15:54.475 "assigned_rate_limits": { 00:15:54.475 "rw_ios_per_sec": 0, 00:15:54.475 "rw_mbytes_per_sec": 0, 00:15:54.475 "r_mbytes_per_sec": 0, 00:15:54.475 "w_mbytes_per_sec": 0 00:15:54.475 }, 00:15:54.475 "claimed": false, 00:15:54.475 "zoned": false, 00:15:54.475 "supported_io_types": { 00:15:54.475 "read": true, 00:15:54.475 "write": true, 00:15:54.475 "unmap": true, 00:15:54.475 "flush": true, 00:15:54.475 "reset": true, 00:15:54.475 "nvme_admin": true, 00:15:54.475 "nvme_io": true, 00:15:54.475 "nvme_io_md": false, 00:15:54.475 "write_zeroes": true, 00:15:54.475 "zcopy": false, 00:15:54.475 "get_zone_info": false, 00:15:54.475 "zone_management": false, 00:15:54.475 "zone_append": false, 00:15:54.475 "compare": true, 00:15:54.475 "compare_and_write": true, 00:15:54.475 "abort": true, 00:15:54.475 "seek_hole": false, 00:15:54.475 "seek_data": false, 00:15:54.475 "copy": true, 00:15:54.475 "nvme_iov_md": false 00:15:54.475 }, 00:15:54.475 "memory_domains": [ 00:15:54.475 { 00:15:54.475 "dma_device_id": "system", 00:15:54.475 "dma_device_type": 1 00:15:54.475 } 00:15:54.475 ], 00:15:54.475 "driver_specific": { 00:15:54.475 "nvme": [ 00:15:54.475 { 00:15:54.475 "trid": { 00:15:54.475 "trtype": "TCP", 00:15:54.475 "adrfam": "IPv4", 00:15:54.475 "traddr": "10.0.0.2", 00:15:54.475 "trsvcid": "4420", 00:15:54.475 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:54.475 }, 00:15:54.475 "ctrlr_data": { 00:15:54.475 "cntlid": 1, 00:15:54.475 "vendor_id": "0x8086", 00:15:54.475 "model_number": "SPDK bdev Controller", 00:15:54.475 "serial_number": "SPDK0", 
00:15:54.475 "firmware_revision": "24.09", 00:15:54.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:54.475 "oacs": { 00:15:54.475 "security": 0, 00:15:54.475 "format": 0, 00:15:54.475 "firmware": 0, 00:15:54.475 "ns_manage": 0 00:15:54.475 }, 00:15:54.475 "multi_ctrlr": true, 00:15:54.475 "ana_reporting": false 00:15:54.475 }, 00:15:54.475 "vs": { 00:15:54.475 "nvme_version": "1.3" 00:15:54.475 }, 00:15:54.475 "ns_data": { 00:15:54.475 "id": 1, 00:15:54.475 "can_share": true 00:15:54.475 } 00:15:54.475 } 00:15:54.475 ], 00:15:54.475 "mp_policy": "active_passive" 00:15:54.475 } 00:15:54.475 } 00:15:54.475 ] 00:15:54.475 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:54.475 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3011068 00:15:54.475 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:54.475 Running I/O for 10 seconds... 00:15:55.853 Latency(us) 00:15:55.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.853 Nvme0n1 : 1.00 22863.00 89.31 0.00 0.00 0.00 0.00 0.00 00:15:55.853 =================================================================================================================== 00:15:55.853 Total : 22863.00 89.31 0.00 0.00 0.00 0.00 0.00 00:15:55.853 00:15:56.420 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:15:56.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.678 Nvme0n1 : 2.00 22987.50 89.79 0.00 0.00 0.00 0.00 0.00 00:15:56.678 =================================================================================================================== 00:15:56.678 Total : 22987.50 89.79 0.00 0.00 0.00 0.00 0.00 00:15:56.678 00:15:56.678 true 00:15:56.678 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:15:56.678 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:56.936 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:56.936 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:56.936 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3011068 00:15:57.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.516 Nvme0n1 : 3.00 22933.00 89.58 0.00 0.00 0.00 0.00 0.00 00:15:57.516 =================================================================================================================== 00:15:57.516 Total : 22933.00 89.58 0.00 0.00 0.00 0.00 0.00 00:15:57.516 00:15:58.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.467 Nvme0n1 : 4.00 23049.75 90.04 0.00 0.00 0.00 0.00 0.00 00:15:58.467 =================================================================================================================== 00:15:58.467 Total : 23049.75 90.04 0.00 
0.00 0.00 0.00 0.00 00:15:58.467 00:15:59.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.845 Nvme0n1 : 5.00 23137.40 90.38 0.00 0.00 0.00 0.00 0.00 00:15:59.845 =================================================================================================================== 00:15:59.845 Total : 23137.40 90.38 0.00 0.00 0.00 0.00 0.00 00:15:59.845 00:16:00.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.777 Nvme0n1 : 6.00 23186.50 90.57 0.00 0.00 0.00 0.00 0.00 00:16:00.777 =================================================================================================================== 00:16:00.777 Total : 23186.50 90.57 0.00 0.00 0.00 0.00 0.00 00:16:00.777 00:16:01.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.711 Nvme0n1 : 7.00 23231.86 90.75 0.00 0.00 0.00 0.00 0.00 00:16:01.711 =================================================================================================================== 00:16:01.711 Total : 23231.86 90.75 0.00 0.00 0.00 0.00 0.00 00:16:01.711 00:16:02.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:02.646 Nvme0n1 : 8.00 23234.88 90.76 0.00 0.00 0.00 0.00 0.00 00:16:02.646 =================================================================================================================== 00:16:02.646 Total : 23234.88 90.76 0.00 0.00 0.00 0.00 0.00 00:16:02.646 00:16:03.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:03.581 Nvme0n1 : 9.00 23271.89 90.91 0.00 0.00 0.00 0.00 0.00 00:16:03.581 =================================================================================================================== 00:16:03.581 Total : 23271.89 90.91 0.00 0.00 0.00 0.00 0.00 00:16:03.581 00:16:04.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.530 Nvme0n1 : 10.00 23301.50 91.02 0.00 0.00 0.00 0.00 0.00 00:16:04.530 =================================================================================================================== 00:16:04.530 Total : 23301.50 91.02 0.00 0.00 0.00 0.00 0.00 00:16:04.530 00:16:04.530 00:16:04.530 Latency(us) 00:16:04.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.530 Nvme0n1 : 10.01 23301.65 91.02 0.00 0.00 5489.32 4168.09 15518.92 00:16:04.530 =================================================================================================================== 00:16:04.530 Total : 23301.65 91.02 0.00 0.00 5489.32 4168.09 15518.92 00:16:04.530 0 00:16:04.530 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3010802 00:16:04.530 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3010802 ']' 00:16:04.530 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3010802 00:16:04.530 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:04.530 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:04.530 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3010802 00:16:04.807 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:04.807 15:21:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:04.807 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3010802' 00:16:04.807 killing process with pid 3010802 00:16:04.807 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3010802 00:16:04.807 Received shutdown signal, test time was about 10.000000 seconds 00:16:04.807 00:16:04.807 Latency(us) 00:16:04.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.807 =================================================================================================================== 00:16:04.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:04.807 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3010802 00:16:04.807 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:05.066 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:05.327 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:05.327 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3007537 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3007537 00:16:05.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3007537 Killed "${NVMF_APP[@]}" "$@" 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3013462 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3013462 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3013462 ']' 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.327 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:05.588 [2024-07-15 15:21:09.257013] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:05.588 [2024-07-15 15:21:09.257084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.588 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.588 [2024-07-15 15:21:09.332503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.588 [2024-07-15 15:21:09.404561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.588 [2024-07-15 15:21:09.404598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.588 [2024-07-15 15:21:09.404610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.588 [2024-07-15 15:21:09.404619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.588 [2024-07-15 15:21:09.404626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
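For reference, the target restart traced above reduces to a few commands; a minimal sketch, with the namespace, masks, and trace hints taken from this run:

    # start the NVMe-oF target inside the test network namespace (paths from this run)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # -e 0xFFFF enables all tracepoint groups; per the notices above, a runtime
    # snapshot can be captured with:
    #     spdk_trace -s nvmf -i 0
    # or /dev/shm/nvmf_trace.0 can be copied for offline analysis.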
00:16:05.588 [2024-07-15 15:21:09.404646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.155 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.155 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:06.155 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.155 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.155 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:06.414 [2024-07-15 15:21:10.256650] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:06.414 [2024-07-15 15:21:10.256735] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:06.414 [2024-07-15 15:21:10.256760] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6cfc9cfb-091f-4860-9c4a-d843c53eb37c 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6cfc9cfb-091f-4860-9c4a-d843c53eb37c 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:06.414 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:06.673 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6cfc9cfb-091f-4860-9c4a-d843c53eb37c -t 2000 00:16:06.933 [ 00:16:06.933 { 00:16:06.933 "name": "6cfc9cfb-091f-4860-9c4a-d843c53eb37c", 00:16:06.933 "aliases": [ 00:16:06.933 "lvs/lvol" 00:16:06.933 ], 00:16:06.933 "product_name": "Logical Volume", 00:16:06.933 "block_size": 4096, 00:16:06.933 "num_blocks": 38912, 00:16:06.933 "uuid": "6cfc9cfb-091f-4860-9c4a-d843c53eb37c", 00:16:06.933 "assigned_rate_limits": { 00:16:06.933 "rw_ios_per_sec": 0, 00:16:06.933 "rw_mbytes_per_sec": 0, 00:16:06.933 "r_mbytes_per_sec": 0, 00:16:06.933 "w_mbytes_per_sec": 0 00:16:06.933 }, 00:16:06.933 "claimed": false, 00:16:06.933 "zoned": false, 00:16:06.933 "supported_io_types": { 00:16:06.933 "read": true, 00:16:06.933 "write": true, 00:16:06.933 "unmap": true, 00:16:06.933 "flush": false, 00:16:06.933 "reset": true, 00:16:06.933 "nvme_admin": false, 00:16:06.933 "nvme_io": false, 00:16:06.933 "nvme_io_md": 
false, 00:16:06.933 "write_zeroes": true, 00:16:06.933 "zcopy": false, 00:16:06.933 "get_zone_info": false, 00:16:06.933 "zone_management": false, 00:16:06.933 "zone_append": false, 00:16:06.933 "compare": false, 00:16:06.933 "compare_and_write": false, 00:16:06.933 "abort": false, 00:16:06.933 "seek_hole": true, 00:16:06.933 "seek_data": true, 00:16:06.933 "copy": false, 00:16:06.933 "nvme_iov_md": false 00:16:06.933 }, 00:16:06.933 "driver_specific": { 00:16:06.933 "lvol": { 00:16:06.933 "lvol_store_uuid": "4bb94e0b-f55b-4882-8439-751dbb2d3903", 00:16:06.933 "base_bdev": "aio_bdev", 00:16:06.933 "thin_provision": false, 00:16:06.933 "num_allocated_clusters": 38, 00:16:06.933 "snapshot": false, 00:16:06.933 "clone": false, 00:16:06.933 "esnap_clone": false 00:16:06.933 } 00:16:06.933 } 00:16:06.933 } 00:16:06.933 ] 00:16:06.933 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:06.933 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:06.933 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:06.933 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:06.933 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:06.933 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:07.192 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:07.192 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:07.192 [2024-07-15 15:21:11.088892] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
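The NOT wrapper being traced here inverts an expected failure: once aio_bdev is hot-removed, looking the lvstore up must return an error for the test to pass (the -19 "No such device" response appears just below). A sketch of the equivalent check, using the RPC and UUID from this run:

    # expect failure: the lvstore must be gone with its backing aio bdev removed
    if scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903; then
        echo 'lvstore unexpectedly still present' >&2
        exit 1
    fi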
00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:07.450 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:07.450 request: 00:16:07.450 { 00:16:07.451 "uuid": "4bb94e0b-f55b-4882-8439-751dbb2d3903", 00:16:07.451 "method": "bdev_lvol_get_lvstores", 00:16:07.451 "req_id": 1 00:16:07.451 } 00:16:07.451 Got JSON-RPC error response 00:16:07.451 response: 00:16:07.451 { 00:16:07.451 "code": -19, 00:16:07.451 "message": "No such device" 00:16:07.451 } 00:16:07.451 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:07.451 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:07.451 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:07.451 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:07.451 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:07.709 aio_bdev 00:16:07.709 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6cfc9cfb-091f-4860-9c4a-d843c53eb37c 00:16:07.709 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6cfc9cfb-091f-4860-9c4a-d843c53eb37c 00:16:07.709 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:07.709 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:07.709 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:07.709 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:07.709 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:07.968 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6cfc9cfb-091f-4860-9c4a-d843c53eb37c -t 2000 00:16:07.968 [ 00:16:07.968 { 00:16:07.968 "name": "6cfc9cfb-091f-4860-9c4a-d843c53eb37c", 00:16:07.968 "aliases": [ 00:16:07.968 "lvs/lvol" 00:16:07.968 ], 00:16:07.968 "product_name": "Logical Volume", 00:16:07.968 "block_size": 4096, 00:16:07.968 "num_blocks": 38912, 00:16:07.968 "uuid": "6cfc9cfb-091f-4860-9c4a-d843c53eb37c", 00:16:07.968 "assigned_rate_limits": { 00:16:07.968 "rw_ios_per_sec": 0, 00:16:07.968 "rw_mbytes_per_sec": 0, 00:16:07.968 "r_mbytes_per_sec": 0, 00:16:07.968 "w_mbytes_per_sec": 0 00:16:07.968 }, 00:16:07.968 "claimed": false, 00:16:07.968 "zoned": false, 00:16:07.968 "supported_io_types": { 
00:16:07.968 "read": true, 00:16:07.968 "write": true, 00:16:07.968 "unmap": true, 00:16:07.968 "flush": false, 00:16:07.968 "reset": true, 00:16:07.968 "nvme_admin": false, 00:16:07.968 "nvme_io": false, 00:16:07.968 "nvme_io_md": false, 00:16:07.968 "write_zeroes": true, 00:16:07.968 "zcopy": false, 00:16:07.968 "get_zone_info": false, 00:16:07.968 "zone_management": false, 00:16:07.968 "zone_append": false, 00:16:07.968 "compare": false, 00:16:07.968 "compare_and_write": false, 00:16:07.968 "abort": false, 00:16:07.968 "seek_hole": true, 00:16:07.968 "seek_data": true, 00:16:07.968 "copy": false, 00:16:07.968 "nvme_iov_md": false 00:16:07.968 }, 00:16:07.968 "driver_specific": { 00:16:07.968 "lvol": { 00:16:07.968 "lvol_store_uuid": "4bb94e0b-f55b-4882-8439-751dbb2d3903", 00:16:07.968 "base_bdev": "aio_bdev", 00:16:07.968 "thin_provision": false, 00:16:07.968 "num_allocated_clusters": 38, 00:16:07.968 "snapshot": false, 00:16:07.968 "clone": false, 00:16:07.968 "esnap_clone": false 00:16:07.968 } 00:16:07.968 } 00:16:07.968 } 00:16:07.968 ] 00:16:07.968 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:07.968 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:07.968 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:08.227 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:08.227 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:08.227 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:08.227 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:08.227 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6cfc9cfb-091f-4860-9c4a-d843c53eb37c 00:16:08.485 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bb94e0b-f55b-4882-8439-751dbb2d3903 00:16:08.743 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:08.743 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:09.002 00:16:09.002 real 0m17.545s 00:16:09.002 user 0m43.584s 00:16:09.002 sys 0m5.125s 00:16:09.002 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.002 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:09.002 ************************************ 00:16:09.003 END TEST lvs_grow_dirty 00:16:09.003 ************************************ 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
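In summary, the dirty-grow pass above hinges on two post-recovery assertions before teardown; a condensed sketch with the UUID and expected counts taken from this run:

    lvs=4bb94e0b-f55b-4882-8439-751dbb2d3903
    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))
    # teardown, in dependency order: lvol -> lvstore -> backing aio bdev
    scripts/rpc.py bdev_lvol_delete 6cfc9cfb-091f-4860-9c4a-d843c53eb37c
    scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
    scripts/rpc.py bdev_aio_delete aio_bdev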
00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:09.003 nvmf_trace.0 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.003 rmmod nvme_tcp 00:16:09.003 rmmod nvme_fabrics 00:16:09.003 rmmod nvme_keyring 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3013462 ']' 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3013462 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3013462 ']' 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3013462 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3013462 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3013462' 00:16:09.003 killing process with pid 3013462 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3013462 00:16:09.003 15:21:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3013462 00:16:09.262 15:21:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:09.262 15:21:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:09.262 15:21:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:09.262 
15:21:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.262 15:21:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.262 15:21:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.262 15:21:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.262 15:21:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.792 15:21:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.792 00:16:11.792 real 0m43.547s 00:16:11.792 user 1m4.100s 00:16:11.792 sys 0m12.790s 00:16:11.792 15:21:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:11.792 15:21:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:11.792 ************************************ 00:16:11.792 END TEST nvmf_lvs_grow 00:16:11.792 ************************************ 00:16:11.792 15:21:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:11.792 15:21:15 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:11.792 15:21:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:11.792 15:21:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.792 15:21:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.792 ************************************ 00:16:11.792 START TEST nvmf_bdev_io_wait 00:16:11.792 ************************************ 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:11.792 * Looking for test storage... 
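Before the next test's setup output begins in earnest, note that the nvmftestfini teardown traced above condenses to the following (pid and interface names from this run; _remove_spdk_ns is a harness helper whose body is not shown in this trace):

    sync
    modprobe -v -r nvme-tcp        # unloads nvme_tcp and, as logged, nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # 3013462 in this run
    _remove_spdk_ns                # drops the cvl_0_0_ns_spdk target namespace
    ip -4 addr flush cvl_0_1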
00:16:11.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.792 15:21:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.352 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.352 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.352 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.352 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.352 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.352 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.352 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:18.353 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:18.353 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:18.353 Found net devices under 0000:af:00.0: cvl_0_0 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:18.353 Found net devices under 0000:af:00.1: cvl_0_1 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:18.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:16:18.353 00:16:18.353 --- 10.0.0.2 ping statistics --- 00:16:18.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.353 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:16:18.353 00:16:18.353 --- 10.0.0.1 ping statistics --- 00:16:18.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.353 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3017734 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3017734 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3017734 ']' 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:18.353 15:21:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.353 [2024-07-15 15:21:21.819354] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:16:18.353 [2024-07-15 15:21:21.819400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.353 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.353 [2024-07-15 15:21:21.893228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.353 [2024-07-15 15:21:21.966749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.353 [2024-07-15 15:21:21.966790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.353 [2024-07-15 15:21:21.966799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.354 [2024-07-15 15:21:21.966807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.354 [2024-07-15 15:21:21.966814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.354 [2024-07-15 15:21:21.966869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.354 [2024-07-15 15:21:21.966965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.354 [2024-07-15 15:21:21.967050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.354 [2024-07-15 15:21:21.967052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:18.921 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 [2024-07-15 15:21:22.741013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
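Because the target was started with --wait-for-rpc, it is configured entirely over RPC before framework init; a sketch of the sequence traced above, with values from this run (the tiny pool sizes are presumably what forces the io-wait path this test exercises):

    scripts/rpc.py bdev_set_options -p 5 -c 1        # -p/-c shrink the bdev_io pool (5) and per-thread cache (1)
    scripts/rpc.py framework_start_init              # leave the --wait-for-rpc holding state
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # transport options as traced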
00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 Malloc0 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 [2024-07-15 15:21:22.801885] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3017948 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3017951 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.922 { 00:16:18.922 "params": { 00:16:18.922 "name": "Nvme$subsystem", 00:16:18.922 "trtype": "$TEST_TRANSPORT", 00:16:18.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.922 "adrfam": "ipv4", 00:16:18.922 "trsvcid": "$NVMF_PORT", 00:16:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.922 "hdgst": ${hdgst:-false}, 00:16:18.922 "ddgst": ${ddgst:-false} 00:16:18.922 }, 00:16:18.922 "method": "bdev_nvme_attach_controller" 00:16:18.922 } 00:16:18.922 EOF 00:16:18.922 )") 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3017953 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.922 { 00:16:18.922 "params": { 00:16:18.922 "name": "Nvme$subsystem", 00:16:18.922 "trtype": "$TEST_TRANSPORT", 00:16:18.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.922 "adrfam": "ipv4", 00:16:18.922 "trsvcid": "$NVMF_PORT", 00:16:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.922 "hdgst": ${hdgst:-false}, 00:16:18.922 "ddgst": ${ddgst:-false} 00:16:18.922 }, 00:16:18.922 "method": "bdev_nvme_attach_controller" 00:16:18.922 } 00:16:18.922 EOF 00:16:18.922 )") 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3017957 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.922 { 00:16:18.922 "params": { 00:16:18.922 "name": "Nvme$subsystem", 00:16:18.922 "trtype": "$TEST_TRANSPORT", 00:16:18.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.922 "adrfam": "ipv4", 00:16:18.922 "trsvcid": "$NVMF_PORT", 00:16:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.922 "hdgst": ${hdgst:-false}, 00:16:18.922 "ddgst": ${ddgst:-false} 00:16:18.922 }, 00:16:18.922 "method": "bdev_nvme_attach_controller" 00:16:18.922 } 00:16:18.922 EOF 00:16:18.922 )") 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:18.922 15:21:22 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:18.922 { 00:16:18.922 "params": { 00:16:18.922 "name": "Nvme$subsystem", 00:16:18.922 "trtype": "$TEST_TRANSPORT", 00:16:18.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:18.922 "adrfam": "ipv4", 00:16:18.922 "trsvcid": "$NVMF_PORT", 00:16:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:18.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:18.922 "hdgst": ${hdgst:-false}, 00:16:18.922 "ddgst": ${ddgst:-false} 00:16:18.922 }, 00:16:18.922 "method": "bdev_nvme_attach_controller" 00:16:18.922 } 00:16:18.922 EOF 00:16:18.922 )") 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3017948 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.922 "params": { 00:16:18.922 "name": "Nvme1", 00:16:18.922 "trtype": "tcp", 00:16:18.922 "traddr": "10.0.0.2", 00:16:18.922 "adrfam": "ipv4", 00:16:18.922 "trsvcid": "4420", 00:16:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.922 "hdgst": false, 00:16:18.922 "ddgst": false 00:16:18.922 }, 00:16:18.922 "method": "bdev_nvme_attach_controller" 00:16:18.922 }' 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
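Each of the four bdevperf instances launched above (write, read, flush and unmap workloads, one core and one PID apiece) receives its bdev configuration over a process-substitution descriptor (--json /dev/fd/63). gen_nvmf_target_json expands the heredoc template shown above once per subsystem and normalizes it with jq; the resolved params blocks are printed here and just below. Folded into SPDK's usual JSON-config shape, the document each bdevperf reads should look roughly like this (a reconstruction from the printed params, not a verbatim dump from this run):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }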
00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.922 "params": { 00:16:18.922 "name": "Nvme1", 00:16:18.922 "trtype": "tcp", 00:16:18.922 "traddr": "10.0.0.2", 00:16:18.922 "adrfam": "ipv4", 00:16:18.922 "trsvcid": "4420", 00:16:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.922 "hdgst": false, 00:16:18.922 "ddgst": false 00:16:18.922 }, 00:16:18.922 "method": "bdev_nvme_attach_controller" 00:16:18.922 }' 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:18.922 "params": { 00:16:18.922 "name": "Nvme1", 00:16:18.922 "trtype": "tcp", 00:16:18.922 "traddr": "10.0.0.2", 00:16:18.922 "adrfam": "ipv4", 00:16:18.922 "trsvcid": "4420", 00:16:18.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:18.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:18.922 "hdgst": false, 00:16:18.922 "ddgst": false 00:16:18.922 }, 00:16:18.922 "method": "bdev_nvme_attach_controller" 00:16:18.922 }' 00:16:18.922 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:19.181 15:21:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:19.181 "params": { 00:16:19.181 "name": "Nvme1", 00:16:19.181 "trtype": "tcp", 00:16:19.181 "traddr": "10.0.0.2", 00:16:19.181 "adrfam": "ipv4", 00:16:19.182 "trsvcid": "4420", 00:16:19.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.182 "hdgst": false, 00:16:19.182 "ddgst": false 00:16:19.182 }, 00:16:19.182 "method": "bdev_nvme_attach_controller" 00:16:19.182 }' 00:16:19.182 [2024-07-15 15:21:22.852759] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:19.182 [2024-07-15 15:21:22.852814] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:19.182 [2024-07-15 15:21:22.856907] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:19.182 [2024-07-15 15:21:22.856958] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:19.182 [2024-07-15 15:21:22.857742] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:19.182 [2024-07-15 15:21:22.857790] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:19.182 [2024-07-15 15:21:22.858724] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:16:19.182 [2024-07-15 15:21:22.858768] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:19.182 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.182 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.182 [2024-07-15 15:21:23.041922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.182 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.441 [2024-07-15 15:21:23.093727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.441 [2024-07-15 15:21:23.117805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:19.441 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.441 [2024-07-15 15:21:23.162985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:19.441 [2024-07-15 15:21:23.185176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.441 [2024-07-15 15:21:23.232960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.441 [2024-07-15 15:21:23.281203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:19.441 [2024-07-15 15:21:23.308964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:19.699 Running I/O for 1 seconds... 00:16:19.699 Running I/O for 1 seconds... 00:16:19.699 Running I/O for 1 seconds... 00:16:19.699 Running I/O for 1 seconds... 00:16:20.634 00:16:20.634 Latency(us) 00:16:20.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.634 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:20.634 Nvme1n1 : 1.00 257157.40 1004.52 0.00 0.00 495.48 201.52 652.08 00:16:20.634 =================================================================================================================== 00:16:20.634 Total : 257157.40 1004.52 0.00 0.00 495.48 201.52 652.08 00:16:20.634 00:16:20.634 Latency(us) 00:16:20.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.634 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:20.634 Nvme1n1 : 1.01 12069.89 47.15 0.00 0.00 10571.42 5740.95 19188.94 00:16:20.634 =================================================================================================================== 00:16:20.634 Total : 12069.89 47.15 0.00 0.00 10571.42 5740.95 19188.94 00:16:20.634 00:16:20.634 Latency(us) 00:16:20.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.634 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:20.634 Nvme1n1 : 1.01 11504.23 44.94 0.00 0.00 11091.76 6370.10 22124.95 00:16:20.634 =================================================================================================================== 00:16:20.634 Total : 11504.23 44.94 0.00 0.00 11091.76 6370.10 22124.95 00:16:20.893 00:16:20.893 Latency(us) 00:16:20.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.893 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:20.893 Nvme1n1 : 1.01 10442.58 40.79 0.00 0.00 12217.95 5819.60 24851.25 00:16:20.893 =================================================================================================================== 00:16:20.893 Total : 10442.58 40.79 0.00 0.00 12217.95 5819.60 24851.25 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 3017951 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3017953 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3017957 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.153 rmmod nvme_tcp 00:16:21.153 rmmod nvme_fabrics 00:16:21.153 rmmod nvme_keyring 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3017734 ']' 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3017734 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3017734 ']' 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3017734 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3017734 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3017734' 00:16:21.153 killing process with pid 3017734 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3017734 00:16:21.153 15:21:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3017734 00:16:21.412 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.413 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.413 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.413 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.413 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.413 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.413 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.413 15:21:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.949 15:21:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.949 00:16:23.949 real 0m12.024s 00:16:23.949 user 0m19.841s 00:16:23.949 sys 0m6.906s 00:16:23.949 15:21:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.949 15:21:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:23.949 ************************************ 00:16:23.949 END TEST nvmf_bdev_io_wait 00:16:23.949 ************************************ 00:16:23.949 15:21:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.949 15:21:27 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:23.949 15:21:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.949 15:21:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.949 15:21:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.949 ************************************ 00:16:23.949 START TEST nvmf_queue_depth 00:16:23.949 ************************************ 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:23.949 * Looking for test storage... 
00:16:23.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.949 15:21:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.534 
15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:30.534 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:30.534 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:30.534 Found net devices under 0000:af:00.0: cvl_0_0 00:16:30.534 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:30.535 Found net devices under 0000:af:00.1: cvl_0_1 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.535 15:21:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:16:30.535 00:16:30.535 --- 10.0.0.2 ping statistics --- 00:16:30.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.535 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:16:30.535 00:16:30.535 --- 10.0.0.1 ping statistics --- 00:16:30.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.535 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3021991 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3021991 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3021991 ']' 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.535 15:21:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:30.535 [2024-07-15 15:21:34.244991] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:16:30.535 [2024-07-15 15:21:34.245041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.535 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.535 [2024-07-15 15:21:34.319190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.535 [2024-07-15 15:21:34.391068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.535 [2024-07-15 15:21:34.391103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.535 [2024-07-15 15:21:34.391112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.535 [2024-07-15 15:21:34.391120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.535 [2024-07-15 15:21:34.391127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.535 [2024-07-15 15:21:34.391154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.174 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.174 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:31.174 15:21:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.174 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.174 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 [2024-07-15 15:21:35.088519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 Malloc0 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.434 
15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 [2024-07-15 15:21:35.155158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3022141 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3022141 /var/tmp/bdevperf.sock 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3022141 ']' 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:31.434 15:21:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:31.434 [2024-07-15 15:21:35.206468] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:16:31.434 [2024-07-15 15:21:35.206515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3022141 ] 00:16:31.434 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.434 [2024-07-15 15:21:35.276942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.694 [2024-07-15 15:21:35.353564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.261 15:21:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:32.261 15:21:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:32.261 15:21:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:32.261 15:21:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.261 15:21:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:32.520 NVMe0n1 00:16:32.520 15:21:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.520 15:21:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:32.520 Running I/O for 10 seconds... 00:16:44.751 00:16:44.751 Latency(us) 00:16:44.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.751 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:44.751 Verification LBA range: start 0x0 length 0x4000 00:16:44.751 NVMe0n1 : 10.05 13167.01 51.43 0.00 0.00 77527.81 7864.32 52219.08 00:16:44.751 =================================================================================================================== 00:16:44.751 Total : 13167.01 51.43 0.00 0.00 77527.81 7864.32 52219.08 00:16:44.751 0 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3022141 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3022141 ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3022141 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3022141 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3022141' 00:16:44.751 killing process with pid 3022141 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3022141 00:16:44.751 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.751 00:16:44.751 Latency(us) 00:16:44.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.751 
=================================================================================================================== 00:16:44.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3022141 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.751 rmmod nvme_tcp 00:16:44.751 rmmod nvme_fabrics 00:16:44.751 rmmod nvme_keyring 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3021991 ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3021991 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3021991 ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3021991 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3021991 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3021991' 00:16:44.751 killing process with pid 3021991 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3021991 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3021991 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.751 15:21:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.321 15:21:49 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.321 00:16:45.321 real 0m21.739s 00:16:45.321 user 0m25.009s 00:16:45.321 sys 0m7.130s 00:16:45.321 15:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.321 15:21:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:45.321 ************************************ 00:16:45.321 END TEST nvmf_queue_depth 00:16:45.321 ************************************ 00:16:45.321 15:21:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:45.321 15:21:49 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:45.321 15:21:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:45.321 15:21:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.321 15:21:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.321 ************************************ 00:16:45.321 START TEST nvmf_target_multipath 00:16:45.321 ************************************ 00:16:45.321 15:21:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:45.580 * Looking for test storage... 00:16:45.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.580 15:21:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.581 15:21:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:52.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:52.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:52.149 Found net devices under 0000:af:00.0: cvl_0_0 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.149 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:52.150 Found net devices under 0000:af:00.1: cvl_0_1 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:52.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:52.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms
00:16:52.150 
00:16:52.150 --- 10.0.0.2 ping statistics ---
00:16:52.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:52.150 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:52.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:52.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
00:16:52.150 
00:16:52.150 --- 10.0.0.1 ping statistics ---
00:16:52.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:52.150 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:16:52.150 only one NIC for nvmf test
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:52.150 rmmod nvme_tcp
00:16:52.150 rmmod nvme_fabrics
00:16:52.150 rmmod nvme_keyring
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:52.150 15:21:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:54.686 
00:16:54.686 real 0m8.930s
00:16:54.686 user 0m1.886s
00:16:54.686 sys 0m5.063s
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:54.686 15:21:58 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:16:54.686 ************************************
00:16:54.686 END TEST nvmf_target_multipath
00:16:54.686 ************************************
00:16:54.686 15:21:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:54.686 15:21:58 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:16:54.686 15:21:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:54.686 15:21:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:54.686 15:21:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:54.686 ************************************
00:16:54.686 START TEST nvmf_zcopy
00:16:54.686 ************************************
00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:16:54.686 * Looking for test storage...
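The starred banners, the real/user/sys triple, and the return-code checks above all come from the harness's run_test wrapper in test/common/autotest_common.sh; the '[' 3 -le 1 ']' and xtrace_disable entries are its argument and tracing bookkeeping. The wrapper itself never appears in this log, so the following is only a minimal sketch of the pattern, not the actual SPDK implementation:

    # Hypothetical simplified run_test: banner, time the suite, banner.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"      # emits the real/user/sys block seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

Note that nvmf_target_multipath did no I/O at all: it echoed 'only one NIC for nvmf test' (multipath.sh@46) and exited 0 (multipath.sh@48), apparently because NVMF_SECOND_TARGET_IP stayed empty; this host has a single dual-port ice NIC (0000:af:00.0/1), so there is no second target address to multipath across. The nvmf_zcopy suite that starts here rebuilds the same namespace-based test bed before its real work. Condensed from the nvmf_tcp_init commands traced above and replayed below, the topology moves the target port into its own network namespace so that traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link:

    # Port cvl_0_0 becomes the target side inside namespace cvl_0_0_ns_spdk;
    # port cvl_0_1 stays in the root namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator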
00:16:54.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.686 15:21:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:01.254 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.254 
15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:01.254 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.254 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:01.255 Found net devices under 0000:af:00.0: cvl_0_0 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:01.255 Found net devices under 0000:af:00.1: cvl_0_1 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:01.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:01.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:17:01.255 
00:17:01.255 --- 10.0.0.2 ping statistics ---
00:17:01.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.255 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:01.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:01.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms
00:17:01.255 
00:17:01.255 --- 10.0.0.1 ping statistics ---
00:17:01.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:01.255 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3031217
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3031217
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3031217 ']'
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:01.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:01.255 15:22:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.255 [2024-07-15 15:22:04.643571] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:17:01.255 [2024-07-15 15:22:04.643622] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:01.255 EAL: No free 2048 kB hugepages reported on node 1
00:17:01.255 [2024-07-15 15:22:04.718310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:01.255 [2024-07-15 15:22:04.790843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:01.255 [2024-07-15 15:22:04.790882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:01.255 [2024-07-15 15:22:04.790893] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:01.255 [2024-07-15 15:22:04.790902] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:01.255 [2024-07-15 15:22:04.790909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:01.255 [2024-07-15 15:22:04.790930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.823 [2024-07-15 15:22:05.489388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.823 [2024-07-15 15:22:05.509542] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.823 malloc0
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:01.823 {
00:17:01.823 "params": {
00:17:01.823 "name": "Nvme$subsystem",
00:17:01.823 "trtype": "$TEST_TRANSPORT",
00:17:01.823 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:01.823 "adrfam": "ipv4",
00:17:01.823 "trsvcid": "$NVMF_PORT",
00:17:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:01.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:01.823 "hdgst": ${hdgst:-false},
00:17:01.823 "ddgst": ${ddgst:-false}
00:17:01.823 },
00:17:01.823 "method": "bdev_nvme_attach_controller"
00:17:01.823 }
00:17:01.823 EOF
00:17:01.823 )")
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:17:01.823 15:22:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:17:01.823 "params": {
00:17:01.823 "name": "Nvme1",
00:17:01.823 "trtype": "tcp",
00:17:01.823 "traddr": "10.0.0.2",
00:17:01.823 "adrfam": "ipv4",
00:17:01.823 "trsvcid": "4420",
00:17:01.823 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:01.823 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:01.823 "hdgst": false,
00:17:01.823 "ddgst": false
00:17:01.823 },
00:17:01.823 "method": "bdev_nvme_attach_controller"
00:17:01.823 }'
00:17:01.823 [2024-07-15 15:22:05.596292] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:17:01.823 [2024-07-15 15:22:05.596340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031495 ]
00:17:01.823 EAL: No free 2048 kB hugepages reported on node 1
00:17:02.095 [2024-07-15 15:22:05.664920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:02.095 [2024-07-15 15:22:05.734614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:02.095 Running I/O for 10 seconds...
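The bdevperf instance launched above never reads a config file from disk: gen_nvmf_target_json prints the bdev_nvme_attach_controller JSON just traced, and the harness hands it over as --json /dev/fd/62, evidently via process substitution. The target it attaches to was provisioned by the rpc_cmd calls a few entries earlier; written out as one-shot scripts/rpc.py invocations they would look roughly like this (a sketch for orientation only, since the harness's rpc_cmd wrapper reuses one RPC session on /var/tmp/spdk.sock rather than re-invoking rpc.py per call):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM-backed bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The verify run keeps 128 outstanding 8192-byte I/Os against that namespace for 10 seconds. As a sanity check on the table that follows, 8935.78 IOPS x 8192 bytes is about 73.2 MB/s, which matches the reported 69.81 MiB/s.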
00:17:12.138 
00:17:12.138                                                                               Latency(us)
00:17:12.138 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:12.138 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:12.138 Verification LBA range: start 0x0 length 0x1000
00:17:12.138 Nvme1n1                     :      10.01    8935.78      69.81       0.00       0.00   14284.58    2503.48   35022.44
00:17:12.138 ===================================================================================================================
00:17:12.138 Total                       :                  8935.78      69.81       0.00       0.00   14284.58    2503.48   35022.44
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3033320
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:12.398 {
00:17:12.398 "params": {
00:17:12.398 "name": "Nvme$subsystem",
00:17:12.398 "trtype": "$TEST_TRANSPORT",
00:17:12.398 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:12.398 "adrfam": "ipv4",
00:17:12.398 "trsvcid": "$NVMF_PORT",
00:17:12.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:12.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:12.398 "hdgst": ${hdgst:-false},
00:17:12.398 "ddgst": ${ddgst:-false}
00:17:12.398 },
00:17:12.398 "method": "bdev_nvme_attach_controller"
00:17:12.398 }
00:17:12.398 EOF
00:17:12.398 )")
00:17:12.398 [2024-07-15 15:22:16.150085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:12.398 [2024-07-15 15:22:16.150123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:12.398 15:22:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:12.398 "params": { 00:17:12.398 "name": "Nvme1", 00:17:12.398 "trtype": "tcp", 00:17:12.398 "traddr": "10.0.0.2", 00:17:12.398 "adrfam": "ipv4", 00:17:12.398 "trsvcid": "4420", 00:17:12.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:12.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:12.398 "hdgst": false, 00:17:12.398 "ddgst": false 00:17:12.398 }, 00:17:12.398 "method": "bdev_nvme_attach_controller" 00:17:12.398 }' 00:17:12.398 [2024-07-15 15:22:16.162086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.162101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.174111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.174123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.186145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.186156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.191830] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:17:12.398 [2024-07-15 15:22:16.191882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033320 ] 00:17:12.398 [2024-07-15 15:22:16.198177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.198189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.210206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.210218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.222238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.222250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.398 [2024-07-15 15:22:16.234271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.234284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.246304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.246315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.258334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.258346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.261094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.398 [2024-07-15 15:22:16.270366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.270379] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.282395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.282407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.398 [2024-07-15 15:22:16.294427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.398 [2024-07-15 15:22:16.294439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.306464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.306486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.318490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.318501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.330524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.330536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.331835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.658 [2024-07-15 15:22:16.342563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.342580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.354597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.354617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.366624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.366638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.378653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.378665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.390688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.390701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.402715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.402727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.414746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.414758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.426796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.426816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.438821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.438842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.450896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.450914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.462917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.462933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.474950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.474963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.486984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.487002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 Running I/O for 5 seconds... 00:17:12.658 [2024-07-15 15:22:16.499013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.499025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.515702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.515723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.531323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.531344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.545182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.545203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.658 [2024-07-15 15:22:16.560486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.658 [2024-07-15 15:22:16.560507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.917 [2024-07-15 15:22:16.574606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.917 [2024-07-15 15:22:16.574630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.917 [2024-07-15 15:22:16.588632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.917 [2024-07-15 15:22:16.588653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.917 [2024-07-15 15:22:16.602477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.917 [2024-07-15 15:22:16.602497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.917 [2024-07-15 15:22:16.614141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.917 [2024-07-15 15:22:16.614162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.917 [2024-07-15 15:22:16.627556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.917 [2024-07-15 15:22:16.627576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.917 [2024-07-15 15:22:16.641042] 
00:17:12.917 [2024-07-15 15:22:16.641066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[previous two *ERROR* lines repeated continuously, one pair roughly every 10-17 ms, last at 2024-07-15 15:22:20.381375]
00:17:16.552 [2024-07-15 15:22:20.395198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:16.552 [2024-07-15 15:22:20.395218]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.552 [2024-07-15 15:22:20.408551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.552 [2024-07-15 15:22:20.408572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.552 [2024-07-15 15:22:20.422229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.552 [2024-07-15 15:22:20.422249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.553 [2024-07-15 15:22:20.435629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.553 [2024-07-15 15:22:20.435649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.553 [2024-07-15 15:22:20.449556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.553 [2024-07-15 15:22:20.449576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.463103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.463123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.477011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.477032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.490624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.490644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.503964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.503983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.517496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.517516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.530573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.530592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.543931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.543956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.557671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.557691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.570638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.570658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.584457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.584478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.597854] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.597875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.611178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.611198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.625139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.625160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.638375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.638395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.651452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.651473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.664759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.664779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.678608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.678628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.690252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.690272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.704086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.704107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.812 [2024-07-15 15:22:20.718133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:16.812 [2024-07-15 15:22:20.718154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.071 [2024-07-15 15:22:20.732014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.732034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.743849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.743870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.757308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.757328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.770526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.770546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.783670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.783691] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.797228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.797253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.810409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.810429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.823735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.823756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.837016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.837037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.850221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.850241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.863621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.863642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.877013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.877033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.890775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.890796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.902012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.902032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.916050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.916071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.929508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.929528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.942917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.942937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.956418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.956438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.072 [2024-07-15 15:22:20.969558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.072 [2024-07-15 15:22:20.969577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.331 [2024-07-15 15:22:20.983325] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.331 [2024-07-15 15:22:20.983345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:20.996767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:20.996787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.010027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.010047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.023363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.023383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.036779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.036799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.050312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.050337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.063903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.063924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.078245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.078265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.093689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.093710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.107379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.107399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.121762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.121781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.139414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.139434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.153227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.153247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.166946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.166965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.181639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.181659] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.196332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.196352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.209988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.210008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.223795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.223815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.332 [2024-07-15 15:22:21.234216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.332 [2024-07-15 15:22:21.234236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.248698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.248718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.260465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.260486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.273798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.273818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.288159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.288179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.303209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.303230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.316810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.316840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.327791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.327810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.341915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.341936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.355692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.355713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.369083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.369103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.382669] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.382689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.396002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.396022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.409660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.409680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.424295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.424315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.439523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.439543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.453037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.453058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.467649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.467669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.482636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.482657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.591 [2024-07-15 15:22:21.497284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.591 [2024-07-15 15:22:21.497304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.851 [2024-07-15 15:22:21.510580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.851 [2024-07-15 15:22:21.510600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.851 00:17:17.851 Latency(us) 00:17:17.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.851 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:17.851 Nvme1n1 : 5.01 17375.84 135.75 0.00 0.00 7359.54 2411.72 23488.10 00:17:17.851 =================================================================================================================== 00:17:17.851 Total : 17375.84 135.75 0.00 0.00 7359.54 2411.72 23488.10 00:17:17.851 [2024-07-15 15:22:21.520655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.851 [2024-07-15 15:22:21.520672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.851 [2024-07-15 15:22:21.532684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.851 [2024-07-15 15:22:21.532698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:17.851 [2024-07-15 15:22:21.544724] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:17.851 [2024-07-15 15:22:21.544744] 
[... the same pair of errors keeps repeating, roughly every 12 ms, from 15:22:21.532 through 15:22:21.677 while the test tears down; identical repetitions condensed ...]
00:17:17.852 [2024-07-15 15:22:21.689089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:17.852 [2024-07-15 15:22:21.689100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:17.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3033320) - No such process
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3033320
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
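Note: the paired errors condensed above are consistent with repeated namespace hot-add retries against an NSID that is still attached. A minimal sketch of the failure mode, assuming a running target and the stock scripts/rpc.py client (an illustration, not the literal zcopy.sh loop):

    # the first add of NSID 1 succeeds; any repeat is rejected by the target
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # -> Requested NSID 1 already in use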
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:17.852 delay0
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.852 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:17.853 15:22:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.853 15:22:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:17:18.110 EAL: No free 2048 kB hugepages reported on node 1
00:17:18.110 [2024-07-15 15:22:21.777951] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:17:24.672 Initializing NVMe Controllers
00:17:24.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:24.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:24.672 Initialization complete. Launching workers.
00:17:24.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69
00:17:24.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33
00:17:24.672 success 138, unsuccess 218, failed 0
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:24.672 rmmod nvme_tcp
00:17:24.672 rmmod nvme_fabrics
00:17:24.672 rmmod nvme_keyring
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3031217 ']'
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3031217
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3031217 ']'
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3031217
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
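Note: the abort example's counters reconcile: 138 success + 218 unsuccess + 0 failed = 356 aborts submitted, and 356 + 33 failed to submit = 389 attempts, which matches the 320 completed + 69 failed I/Os reported for NSID 1.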
00:17:24.672 15:22:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3031217
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3031217'
00:17:24.672 killing process with pid 3031217
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3031217
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3031217
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:24.672 15:22:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:26.578 15:22:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:26.578
00:17:26.578 real 0m32.127s
00:17:26.578 user 0m41.459s
00:17:26.578 sys 0m12.972s
00:17:26.578 15:22:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:17:26.578 15:22:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:26.578 ************************************
00:17:26.578 END TEST nvmf_zcopy
00:17:26.578 ************************************
00:17:26.578 15:22:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:17:26.578 15:22:30 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:17:26.578 15:22:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:17:26.578 15:22:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:26.578 15:22:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:17:26.578 ************************************
00:17:26.578 START TEST nvmf_nmic
00:17:26.578 ************************************
00:17:26.578 15:22:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:17:26.838 * Looking for test storage...
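Note: run_test only adds timing and xtrace bookkeeping around the suite; the suite script itself takes just the transport flag, so it can in principle be re-run on its own. A sketch, assuming the same workspace layout and root privileges:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/target/nmic.sh --transport=tcp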
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable
00:17:26.838 15:22:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
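Note: the NVME_HOST array built above is later splatted onto nvme-cli calls. A sketch of the shape such a connect takes, using the hostnqn/hostid from this run and the target address configured below (an illustration, not a command traced in this log):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --hostid=006f0d1b-21c0-e711-906e-00163566263e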
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=()
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=()
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=()
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=()
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=()
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=()
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=()
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:17:33.405 Found 0000:af:00.0 (0x8086 - 0x159b)
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:17:33.405 Found 0000:af:00.1 (0x8086 - 0x159b)
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:17:33.405 Found net devices under 0000:af:00.0: cvl_0_0
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:33.405 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]]
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:17:33.406 Found net devices under 0000:af:00.1: cvl_0_1
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
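Note: the discovery above matches PCI IDs against a cached bus scan and then maps each function to its netdev through sysfs. The same lookup can be reproduced by hand (a sketch, assuming pciutils is installed):

    lspci -d 8086:159b                           # list Intel E810 functions (device ID 0x159b)
    ls /sys/bus/pci/devices/0000:af:00.0/net     # netdev name behind a PCI function, e.g. cvl_0_0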
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:33.406 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:33.406 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms
00:17:33.406
00:17:33.406 --- 10.0.0.2 ping statistics ---
00:17:33.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:33.406 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:33.406 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
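Note: at this point cvl_0_0 (10.0.0.2, target side) lives inside the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1, initiator side) stays in the root namespace; the two pings cross that boundary in each direction. A quick way to verify the split (a sketch using plain iproute2):

    ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0   # expect 10.0.0.2/24
    ip -4 addr show dev cvl_0_1                                 # expect 10.0.0.1/24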
00:17:33.406 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms
00:17:33.406
00:17:33.406 --- 10.0.0.1 ping statistics ---
00:17:33.406 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:33.406 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3038854
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3038854
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3038854 ']'
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:33.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable
00:17:33.406 15:22:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:17:33.665 [2024-07-15 15:22:37.357325] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:17:33.665 [2024-07-15 15:22:37.357372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:33.665 EAL: No free 2048 kB hugepages reported on node 1
00:17:33.665 [2024-07-15 15:22:37.430010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:33.665 [2024-07-15 15:22:37.505035] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:33.665 [2024-07-15 15:22:37.505074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
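Note: waitforlisten polls until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. An equivalent manual probe, assuming the stock scripts/rpc.py client (rpc_get_methods is a cheap query that fails until the socket is live):

    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods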
00:17:33.665 [2024-07-15 15:22:37.505083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.665 [2024-07-15 15:22:37.505092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.665 [2024-07-15 15:22:37.505099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.665 [2024-07-15 15:22:37.505146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.665 [2024-07-15 15:22:37.505242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.665 [2024-07-15 15:22:37.505255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.665 [2024-07-15 15:22:37.505257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 [2024-07-15 15:22:38.214633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 Malloc0 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 [2024-07-15 15:22:38.269406] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:34.603 test case1: single bdev can't be used in multiple subsystems 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 [2024-07-15 15:22:38.293306] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:34.603 [2024-07-15 15:22:38.293328] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:34.603 [2024-07-15 15:22:38.293338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:34.603 request: 00:17:34.603 { 00:17:34.603 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:34.603 "namespace": { 00:17:34.603 "bdev_name": "Malloc0", 00:17:34.603 "no_auto_visible": false 00:17:34.603 }, 00:17:34.603 "method": "nvmf_subsystem_add_ns", 00:17:34.603 "req_id": 1 00:17:34.603 } 00:17:34.603 Got JSON-RPC error response 00:17:34.603 response: 00:17:34.603 { 00:17:34.603 "code": -32602, 00:17:34.603 "message": "Invalid parameters" 00:17:34.603 } 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:34.603 Adding namespace failed - expected result. 
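[Condensed from the trace above, test case1 boils down to the RPC sequence below; this is a sketch assembled from the commands visible in the log (rpc_cmd is the harness wrapper around scripts/rpc.py), not a separate script shipped with the test. The point under test is the bdev claim model: nvmf_subsystem_add_ns takes an exclusive_write claim on the bdev, so a second subsystem cannot open it and rpc.py surfaces the -32602 error shown above.]

    # Sketch of test case1 (all names and flags taken from the trace above):
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # claims Malloc0 (exclusive_write)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: bdev already claimed;
                                                                       # the harness expects this failure

[The harness records the failure as the expected result and proceeds to test case2, which follows in the trace.]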
00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:34.603 test case2: host connect to nvmf target in multiple paths 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:34.603 [2024-07-15 15:22:38.309452] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.603 15:22:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:35.980 15:22:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:37.391 15:22:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:37.391 15:22:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:37.391 15:22:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.391 15:22:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:37.391 15:22:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:39.292 15:22:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:39.292 15:22:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:39.292 15:22:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.292 15:22:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:39.292 15:22:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.292 15:22:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:39.292 15:22:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:39.292 [global] 00:17:39.292 thread=1 00:17:39.292 invalidate=1 00:17:39.292 rw=write 00:17:39.292 time_based=1 00:17:39.292 runtime=1 00:17:39.292 ioengine=libaio 00:17:39.292 direct=1 00:17:39.292 bs=4096 00:17:39.292 iodepth=1 00:17:39.292 norandommap=0 00:17:39.292 numjobs=1 00:17:39.292 00:17:39.292 verify_dump=1 00:17:39.292 verify_backlog=512 00:17:39.292 verify_state_save=0 00:17:39.292 do_verify=1 00:17:39.292 verify=crc32c-intel 00:17:39.292 [job0] 00:17:39.292 filename=/dev/nvme0n1 00:17:39.292 Could not set queue depth (nvme0n1) 00:17:39.551 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:39.551 fio-3.35 00:17:39.551 Starting 1 thread 00:17:40.927 00:17:40.927 job0: (groupid=0, jobs=1): err= 0: pid=3040083: Mon Jul 15 15:22:44 2024 00:17:40.927 read: IOPS=1401, BW=5606KiB/s (5741kB/s)(5612KiB/1001msec) 00:17:40.927 slat (nsec): min=8761, max=36332, avg=9545.57, stdev=1223.62 
00:17:40.927 clat (usec): min=322, max=516, avg=423.22, stdev=25.57 00:17:40.927 lat (usec): min=332, max=525, avg=432.77, stdev=25.61 00:17:40.927 clat percentiles (usec): 00:17:40.927 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 400], 20.00th=[ 416], 00:17:40.927 | 30.00th=[ 424], 40.00th=[ 424], 50.00th=[ 429], 60.00th=[ 433], 00:17:40.927 | 70.00th=[ 433], 80.00th=[ 437], 90.00th=[ 441], 95.00th=[ 449], 00:17:40.927 | 99.00th=[ 490], 99.50th=[ 502], 99.90th=[ 515], 99.95th=[ 519], 00:17:40.927 | 99.99th=[ 519] 00:17:40.927 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:40.927 slat (usec): min=11, max=25758, avg=29.50, stdev=656.91 00:17:40.927 clat (usec): min=190, max=486, avg=222.05, stdev=26.89 00:17:40.927 lat (usec): min=204, max=26237, avg=251.55, stdev=664.01 00:17:40.927 clat percentiles (usec): 00:17:40.927 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 204], 00:17:40.927 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:17:40.927 | 70.00th=[ 223], 80.00th=[ 241], 90.00th=[ 273], 95.00th=[ 277], 00:17:40.927 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 482], 99.95th=[ 486], 00:17:40.927 | 99.99th=[ 486] 00:17:40.927 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:17:40.927 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:40.927 lat (usec) : 250=43.31%, 500=56.38%, 750=0.31% 00:17:40.927 cpu : usr=1.80%, sys=3.60%, ctx=2941, majf=0, minf=2 00:17:40.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:40.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:40.927 issued rwts: total=1403,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:40.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:40.927 00:17:40.927 Run status group 0 (all jobs): 00:17:40.927 READ: bw=5606KiB/s (5741kB/s), 5606KiB/s-5606KiB/s (5741kB/s-5741kB/s), io=5612KiB (5747kB), run=1001-1001msec 00:17:40.927 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:17:40.927 00:17:40.927 Disk stats (read/write): 00:17:40.927 nvme0n1: ios=1200/1536, merge=0/0, ticks=1474/335, in_queue=1809, util=98.70% 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.927 rmmod nvme_tcp 00:17:40.927 rmmod nvme_fabrics 00:17:40.927 rmmod nvme_keyring 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3038854 ']' 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3038854 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3038854 ']' 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3038854 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3038854 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3038854' 00:17:40.927 killing process with pid 3038854 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3038854 00:17:40.927 15:22:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3038854 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.187 15:22:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.718 15:22:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.718 00:17:43.718 real 0m16.722s 00:17:43.718 user 0m39.916s 00:17:43.718 sys 0m6.339s 00:17:43.718 15:22:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.718 15:22:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:43.718 ************************************ 00:17:43.718 END TEST nvmf_nmic 00:17:43.718 ************************************ 00:17:43.718 15:22:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:43.718 15:22:47 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:43.718 15:22:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:43.718 
15:22:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.718 15:22:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:43.718 ************************************ 00:17:43.718 START TEST nvmf_fio_target 00:17:43.718 ************************************ 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:43.718 * Looking for test storage... 00:17:43.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:43.718 15:22:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.284 15:22:54 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:50.284 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:50.284 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.284 15:22:54 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:50.284 Found net devices under 0000:af:00.0: cvl_0_0 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:50.284 Found net devices under 0000:af:00.1: cvl_0_1 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.284 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:17:50.543 00:17:50.543 --- 10.0.0.2 ping statistics --- 00:17:50.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.543 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:17:50.543 00:17:50.543 --- 10.0.0.1 ping statistics --- 00:17:50.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.543 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3044027 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3044027 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3044027 ']' 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
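[The waitforlisten step traced above blocks until the freshly launched, namespaced nvmf_tgt answers on its RPC socket. A minimal sketch of that polling idea follows, assuming scripts/rpc.py and the rpc_get_methods RPC; the actual helper in common/autotest_common.sh is more defensive (retry limit, process liveness checks) than this.]

    # Sketch only, not the harness helper: poll the RPC socket until the
    # target responds, giving up after roughly 10 seconds.
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done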
00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.543 15:22:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.802 [2024-07-15 15:22:54.453854] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:17:50.802 [2024-07-15 15:22:54.453916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.802 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.802 [2024-07-15 15:22:54.528025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:50.802 [2024-07-15 15:22:54.599364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.802 [2024-07-15 15:22:54.599403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.802 [2024-07-15 15:22:54.599413] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.802 [2024-07-15 15:22:54.599422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.802 [2024-07-15 15:22:54.599429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.802 [2024-07-15 15:22:54.599476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.802 [2024-07-15 15:22:54.599495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.802 [2024-07-15 15:22:54.599584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.802 [2024-07-15 15:22:54.599586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.370 15:22:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.370 15:22:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:51.370 15:22:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.370 15:22:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.370 15:22:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.629 15:22:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.629 15:22:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:51.629 [2024-07-15 15:22:55.467343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.629 15:22:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:51.887 15:22:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:51.887 15:22:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:52.146 15:22:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:52.146 15:22:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:52.405 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:17:52.405 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:52.405 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:52.405 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:52.664 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:52.922 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:52.922 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:53.182 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:53.182 15:22:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:53.182 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:53.182 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:53.441 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:53.699 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:53.700 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.700 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:53.700 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.959 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.218 [2024-07-15 15:22:57.928397] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.218 15:22:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:54.477 15:22:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:54.477 15:22:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:55.854 15:22:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:55.854 15:22:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:55.854 15:22:59 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.854 15:22:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:55.854 15:22:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:55.854 15:22:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:57.756 15:23:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:57.756 15:23:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:57.756 15:23:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.756 15:23:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:57.756 15:23:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.756 15:23:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:57.756 15:23:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:58.014 [global] 00:17:58.014 thread=1 00:17:58.014 invalidate=1 00:17:58.014 rw=write 00:17:58.014 time_based=1 00:17:58.014 runtime=1 00:17:58.014 ioengine=libaio 00:17:58.014 direct=1 00:17:58.014 bs=4096 00:17:58.014 iodepth=1 00:17:58.014 norandommap=0 00:17:58.014 numjobs=1 00:17:58.014 00:17:58.014 verify_dump=1 00:17:58.014 verify_backlog=512 00:17:58.014 verify_state_save=0 00:17:58.014 do_verify=1 00:17:58.014 verify=crc32c-intel 00:17:58.014 [job0] 00:17:58.014 filename=/dev/nvme0n1 00:17:58.014 [job1] 00:17:58.014 filename=/dev/nvme0n2 00:17:58.014 [job2] 00:17:58.014 filename=/dev/nvme0n3 00:17:58.014 [job3] 00:17:58.014 filename=/dev/nvme0n4 00:17:58.014 Could not set queue depth (nvme0n1) 00:17:58.014 Could not set queue depth (nvme0n2) 00:17:58.014 Could not set queue depth (nvme0n3) 00:17:58.014 Could not set queue depth (nvme0n4) 00:17:58.271 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:58.271 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:58.271 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:58.271 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:58.271 fio-3.35 00:17:58.271 Starting 4 threads 00:17:59.648 00:17:59.648 job0: (groupid=0, jobs=1): err= 0: pid=3045556: Mon Jul 15 15:23:03 2024 00:17:59.648 read: IOPS=22, BW=89.3KiB/s (91.5kB/s)(92.0KiB/1030msec) 00:17:59.648 slat (nsec): min=9695, max=28020, avg=23176.22, stdev=4157.05 00:17:59.648 clat (usec): min=345, max=41439, avg=39223.64, stdev=8475.78 00:17:59.648 lat (usec): min=357, max=41448, avg=39246.82, stdev=8478.31 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[ 347], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:59.648 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:59.648 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:59.648 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:59.648 | 99.99th=[41681] 00:17:59.648 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:17:59.648 slat (nsec): min=11973, max=62910, avg=13256.89, stdev=3081.42 
00:17:59.648 clat (usec): min=171, max=359, avg=231.25, stdev=25.01 00:17:59.648 lat (usec): min=183, max=401, avg=244.50, stdev=25.83 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 212], 00:17:59.648 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:17:59.648 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 281], 00:17:59.648 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[ 359], 99.95th=[ 359], 00:17:59.648 | 99.99th=[ 359] 00:17:59.648 bw ( KiB/s): min= 4096, max= 4096, per=25.78%, avg=4096.00, stdev= 0.00, samples=1 00:17:59.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:59.648 lat (usec) : 250=77.20%, 500=18.69% 00:17:59.648 lat (msec) : 50=4.11% 00:17:59.648 cpu : usr=0.58%, sys=0.87%, ctx=535, majf=0, minf=1 00:17:59.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.648 job1: (groupid=0, jobs=1): err= 0: pid=3045557: Mon Jul 15 15:23:03 2024 00:17:59.648 read: IOPS=1072, BW=4292KiB/s (4395kB/s)(4296KiB/1001msec) 00:17:59.648 slat (nsec): min=6983, max=25221, avg=9487.70, stdev=1110.21 00:17:59.648 clat (usec): min=381, max=760, avg=510.21, stdev=38.86 00:17:59.648 lat (usec): min=388, max=771, avg=519.70, stdev=39.04 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[ 404], 5.00th=[ 429], 10.00th=[ 453], 20.00th=[ 494], 00:17:59.648 | 30.00th=[ 506], 40.00th=[ 515], 50.00th=[ 519], 60.00th=[ 523], 00:17:59.648 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 537], 95.00th=[ 545], 00:17:59.648 | 99.00th=[ 594], 99.50th=[ 701], 99.90th=[ 725], 99.95th=[ 758], 00:17:59.648 | 99.99th=[ 758] 00:17:59.648 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:59.648 slat (usec): min=4, max=240, avg=13.51, stdev=17.53 00:17:59.648 clat (usec): min=63, max=1048, avg=268.54, stdev=76.67 00:17:59.648 lat (usec): min=205, max=1059, avg=282.04, stdev=78.78 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:17:59.648 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 258], 00:17:59.648 | 70.00th=[ 273], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 396], 00:17:59.648 | 99.00th=[ 529], 99.50th=[ 685], 99.90th=[ 996], 99.95th=[ 1057], 00:17:59.648 | 99.99th=[ 1057] 00:17:59.648 bw ( KiB/s): min= 6472, max= 6472, per=40.73%, avg=6472.00, stdev= 0.00, samples=1 00:17:59.648 iops : min= 1618, max= 1618, avg=1618.00, stdev= 0.00, samples=1 00:17:59.648 lat (usec) : 100=0.08%, 250=32.30%, 500=35.33%, 750=32.07%, 1000=0.19% 00:17:59.648 lat (msec) : 2=0.04% 00:17:59.648 cpu : usr=3.10%, sys=3.30%, ctx=2613, majf=0, minf=2 00:17:59.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 issued rwts: total=1074,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.648 job2: (groupid=0, jobs=1): err= 0: pid=3045560: Mon Jul 15 15:23:03 2024 00:17:59.648 read: IOPS=21, 
BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:17:59.648 slat (nsec): min=11612, max=26248, avg=24609.77, stdev=2929.44 00:17:59.648 clat (usec): min=40732, max=41093, avg=40959.40, stdev=69.38 00:17:59.648 lat (usec): min=40744, max=41118, avg=40984.01, stdev=71.50 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:59.648 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:59.648 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:59.648 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:59.648 | 99.99th=[41157] 00:17:59.648 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:17:59.648 slat (nsec): min=11288, max=44420, avg=13504.90, stdev=2377.40 00:17:59.648 clat (usec): min=165, max=429, avg=235.38, stdev=27.83 00:17:59.648 lat (usec): min=178, max=473, avg=248.89, stdev=29.08 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:17:59.648 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:17:59.648 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:17:59.648 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 429], 99.95th=[ 429], 00:17:59.648 | 99.99th=[ 429] 00:17:59.648 bw ( KiB/s): min= 4096, max= 4096, per=25.78%, avg=4096.00, stdev= 0.00, samples=1 00:17:59.648 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:59.648 lat (usec) : 250=73.41%, 500=22.47% 00:17:59.648 lat (msec) : 50=4.12% 00:17:59.648 cpu : usr=0.58%, sys=0.39%, ctx=536, majf=0, minf=1 00:17:59.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.648 job3: (groupid=0, jobs=1): err= 0: pid=3045561: Mon Jul 15 15:23:03 2024 00:17:59.648 read: IOPS=1203, BW=4815KiB/s (4931kB/s)(4820KiB/1001msec) 00:17:59.648 slat (nsec): min=8657, max=23818, avg=9549.33, stdev=978.85 00:17:59.648 clat (usec): min=276, max=658, avg=498.47, stdev=62.56 00:17:59.648 lat (usec): min=285, max=668, avg=508.02, stdev=62.54 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[ 318], 5.00th=[ 338], 10.00th=[ 379], 20.00th=[ 486], 00:17:59.648 | 30.00th=[ 510], 40.00th=[ 515], 50.00th=[ 523], 60.00th=[ 529], 00:17:59.648 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 545], 95.00th=[ 545], 00:17:59.648 | 99.00th=[ 562], 99.50th=[ 619], 99.90th=[ 660], 99.95th=[ 660], 00:17:59.648 | 99.99th=[ 660] 00:17:59.648 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:59.648 slat (nsec): min=11749, max=50604, avg=12957.81, stdev=2253.50 00:17:59.648 clat (usec): min=197, max=367, avg=234.46, stdev=19.48 00:17:59.648 lat (usec): min=209, max=408, avg=247.41, stdev=19.88 00:17:59.648 clat percentiles (usec): 00:17:59.648 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 212], 20.00th=[ 219], 00:17:59.648 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:17:59.648 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 273], 00:17:59.648 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 330], 99.95th=[ 367], 00:17:59.648 | 99.99th=[ 367] 00:17:59.648 bw ( KiB/s): min= 7152, max= 7152, 
per=45.01%, avg=7152.00, stdev= 0.00, samples=1 00:17:59.648 iops : min= 1788, max= 1788, avg=1788.00, stdev= 0.00, samples=1 00:17:59.648 lat (usec) : 250=45.42%, 500=20.83%, 750=33.75% 00:17:59.648 cpu : usr=3.50%, sys=3.80%, ctx=2741, majf=0, minf=1 00:17:59.648 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.648 issued rwts: total=1205,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.648 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.648 00:17:59.648 Run status group 0 (all jobs): 00:17:59.648 READ: bw=9016KiB/s (9233kB/s), 85.4KiB/s-4815KiB/s (87.4kB/s-4931kB/s), io=9296KiB (9519kB), run=1001-1031msec 00:17:59.648 WRITE: bw=15.5MiB/s (16.3MB/s), 1986KiB/s-6138KiB/s (2034kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1031msec 00:17:59.648 00:17:59.648 Disk stats (read/write): 00:17:59.648 nvme0n1: ios=67/512, merge=0/0, ticks=692/112, in_queue=804, util=84.37% 00:17:59.648 nvme0n2: ios=1048/1034, merge=0/0, ticks=1504/250, in_queue=1754, util=99.38% 00:17:59.648 nvme0n3: ios=39/512, merge=0/0, ticks=1605/121, in_queue=1726, util=99.57% 00:17:59.648 nvme0n4: ios=1024/1085, merge=0/0, ticks=530/251, in_queue=781, util=89.25% 00:17:59.648 15:23:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:59.648 [global] 00:17:59.648 thread=1 00:17:59.648 invalidate=1 00:17:59.648 rw=randwrite 00:17:59.648 time_based=1 00:17:59.648 runtime=1 00:17:59.648 ioengine=libaio 00:17:59.648 direct=1 00:17:59.648 bs=4096 00:17:59.648 iodepth=1 00:17:59.648 norandommap=0 00:17:59.648 numjobs=1 00:17:59.648 00:17:59.648 verify_dump=1 00:17:59.648 verify_backlog=512 00:17:59.648 verify_state_save=0 00:17:59.648 do_verify=1 00:17:59.648 verify=crc32c-intel 00:17:59.648 [job0] 00:17:59.648 filename=/dev/nvme0n1 00:17:59.648 [job1] 00:17:59.648 filename=/dev/nvme0n2 00:17:59.648 [job2] 00:17:59.648 filename=/dev/nvme0n3 00:17:59.648 [job3] 00:17:59.648 filename=/dev/nvme0n4 00:17:59.648 Could not set queue depth (nvme0n1) 00:17:59.648 Could not set queue depth (nvme0n2) 00:17:59.648 Could not set queue depth (nvme0n3) 00:17:59.649 Could not set queue depth (nvme0n4) 00:17:59.907 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:59.907 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:59.907 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:59.907 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:59.907 fio-3.35 00:17:59.907 Starting 4 threads 00:18:01.352 00:18:01.352 job0: (groupid=0, jobs=1): err= 0: pid=3045984: Mon Jul 15 15:23:04 2024 00:18:01.352 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:01.352 slat (nsec): min=8833, max=27399, avg=9927.68, stdev=1677.24 00:18:01.352 clat (usec): min=296, max=40968, avg=615.54, stdev=1787.82 00:18:01.353 lat (usec): min=306, max=40977, avg=625.47, stdev=1788.16 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[ 433], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 506], 00:18:01.353 | 30.00th=[ 515], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 537], 
00:18:01.353 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 570], 95.00th=[ 586], 00:18:01.353 | 99.00th=[ 668], 99.50th=[ 1057], 99.90th=[41157], 99.95th=[41157], 00:18:01.353 | 99.99th=[41157] 00:18:01.353 write: IOPS=1451, BW=5806KiB/s (5946kB/s)(5812KiB/1001msec); 0 zone resets 00:18:01.353 slat (usec): min=11, max=255, avg=13.50, stdev= 6.80 00:18:01.353 clat (usec): min=158, max=471, avg=228.76, stdev=37.95 00:18:01.353 lat (usec): min=171, max=525, avg=242.26, stdev=39.25 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 204], 00:18:01.353 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:18:01.353 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 293], 00:18:01.353 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 469], 99.95th=[ 474], 00:18:01.353 | 99.99th=[ 474] 00:18:01.353 bw ( KiB/s): min= 4496, max= 4496, per=28.86%, avg=4496.00, stdev= 0.00, samples=1 00:18:01.353 iops : min= 1124, max= 1124, avg=1124.00, stdev= 0.00, samples=1 00:18:01.353 lat (usec) : 250=45.14%, 500=18.89%, 750=35.65%, 1000=0.04% 00:18:01.353 lat (msec) : 2=0.16%, 4=0.04%, 50=0.08% 00:18:01.353 cpu : usr=3.30%, sys=3.50%, ctx=2480, majf=0, minf=2 00:18:01.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 issued rwts: total=1024,1453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.353 job1: (groupid=0, jobs=1): err= 0: pid=3045985: Mon Jul 15 15:23:04 2024 00:18:01.353 read: IOPS=218, BW=875KiB/s (896kB/s)(900KiB/1029msec) 00:18:01.353 slat (nsec): min=8710, max=41484, avg=10936.95, stdev=4727.30 00:18:01.353 clat (usec): min=248, max=41213, avg=3968.59, stdev=11587.43 00:18:01.353 lat (usec): min=267, max=41229, avg=3979.53, stdev=11590.35 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 297], 00:18:01.353 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 359], 60.00th=[ 383], 00:18:01.353 | 70.00th=[ 400], 80.00th=[ 433], 90.00th=[ 529], 95.00th=[41157], 00:18:01.353 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:01.353 | 99.99th=[41157] 00:18:01.353 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:18:01.353 slat (usec): min=7, max=271, avg=17.01, stdev=24.96 00:18:01.353 clat (usec): min=28, max=546, avg=237.69, stdev=47.50 00:18:01.353 lat (usec): min=177, max=586, avg=254.70, stdev=49.35 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 198], 00:18:01.353 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 245], 00:18:01.353 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 326], 00:18:01.353 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 545], 99.95th=[ 545], 00:18:01.353 | 99.99th=[ 545] 00:18:01.353 bw ( KiB/s): min= 4096, max= 4096, per=26.29%, avg=4096.00, stdev= 0.00, samples=1 00:18:01.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:01.353 lat (usec) : 50=0.27%, 100=0.14%, 250=44.78%, 500=51.56%, 750=0.54% 00:18:01.353 lat (msec) : 50=2.71% 00:18:01.353 cpu : usr=0.29%, sys=1.56%, ctx=739, majf=0, minf=1 00:18:01.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.353 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 issued rwts: total=225,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.353 job2: (groupid=0, jobs=1): err= 0: pid=3045986: Mon Jul 15 15:23:04 2024 00:18:01.353 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:01.353 slat (nsec): min=8926, max=47523, avg=10006.77, stdev=1883.40 00:18:01.353 clat (usec): min=324, max=41315, avg=577.85, stdev=1277.59 00:18:01.353 lat (usec): min=334, max=41325, avg=587.86, stdev=1277.58 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[ 396], 5.00th=[ 433], 10.00th=[ 486], 20.00th=[ 510], 00:18:01.353 | 30.00th=[ 519], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:18:01.353 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 578], 95.00th=[ 594], 00:18:01.353 | 99.00th=[ 840], 99.50th=[ 1205], 99.90th=[ 1844], 99.95th=[41157], 00:18:01.353 | 99.99th=[41157] 00:18:01.353 write: IOPS=1529, BW=6118KiB/s (6265kB/s)(6124KiB/1001msec); 0 zone resets 00:18:01.353 slat (nsec): min=12230, max=47116, avg=13322.72, stdev=2005.58 00:18:01.353 clat (usec): min=192, max=1456, avg=241.85, stdev=41.43 00:18:01.353 lat (usec): min=205, max=1471, avg=255.17, stdev=41.66 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 219], 00:18:01.353 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:18:01.353 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:18:01.353 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 449], 99.95th=[ 1450], 00:18:01.353 | 99.99th=[ 1450] 00:18:01.353 bw ( KiB/s): min= 5296, max= 5296, per=33.99%, avg=5296.00, stdev= 0.00, samples=1 00:18:01.353 iops : min= 1324, max= 1324, avg=1324.00, stdev= 0.00, samples=1 00:18:01.353 lat (usec) : 250=41.33%, 500=24.42%, 750=33.74%, 1000=0.20% 00:18:01.353 lat (msec) : 2=0.27%, 50=0.04% 00:18:01.353 cpu : usr=2.50%, sys=4.50%, ctx=2556, majf=0, minf=1 00:18:01.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 issued rwts: total=1024,1531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.353 job3: (groupid=0, jobs=1): err= 0: pid=3045987: Mon Jul 15 15:23:04 2024 00:18:01.353 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:18:01.353 slat (nsec): min=11383, max=26841, avg=24077.05, stdev=3106.77 00:18:01.353 clat (usec): min=40823, max=41120, avg=40974.21, stdev=85.75 00:18:01.353 lat (usec): min=40848, max=41143, avg=40998.28, stdev=85.24 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:01.353 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:01.353 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:01.353 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:01.353 | 99.99th=[41157] 00:18:01.353 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:01.353 slat (nsec): min=12383, max=44620, avg=14008.98, stdev=2888.61 00:18:01.353 clat (usec): min=224, max=450, avg=256.79, stdev=22.23 00:18:01.353 lat (usec): min=237, max=494, avg=270.79, 
stdev=22.86 00:18:01.353 clat percentiles (usec): 00:18:01.353 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 239], 00:18:01.353 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:18:01.353 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 297], 00:18:01.353 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 449], 99.95th=[ 449], 00:18:01.353 | 99.99th=[ 449] 00:18:01.353 bw ( KiB/s): min= 4096, max= 4096, per=26.29%, avg=4096.00, stdev= 0.00, samples=1 00:18:01.353 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:01.353 lat (usec) : 250=45.59%, 500=50.47% 00:18:01.353 lat (msec) : 50=3.94% 00:18:01.353 cpu : usr=1.00%, sys=0.50%, ctx=534, majf=0, minf=1 00:18:01.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:01.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.353 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:01.353 00:18:01.353 Run status group 0 (all jobs): 00:18:01.353 READ: bw=8917KiB/s (9131kB/s), 83.8KiB/s-4092KiB/s (85.8kB/s-4190kB/s), io=9176KiB (9396kB), run=1001-1029msec 00:18:01.353 WRITE: bw=15.2MiB/s (16.0MB/s), 1990KiB/s-6118KiB/s (2038kB/s-6265kB/s), io=15.7MiB (16.4MB), run=1001-1029msec 00:18:01.353 00:18:01.353 Disk stats (read/write): 00:18:01.353 nvme0n1: ios=930/1024, merge=0/0, ticks=985/227, in_queue=1212, util=100.00% 00:18:01.353 nvme0n2: ios=243/512, merge=0/0, ticks=1590/119, in_queue=1709, util=89.25% 00:18:01.353 nvme0n3: ios=959/1024, merge=0/0, ticks=1407/250, in_queue=1657, util=92.66% 00:18:01.353 nvme0n4: ios=41/512, merge=0/0, ticks=1603/118, in_queue=1721, util=95.91% 00:18:01.353 15:23:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:01.353 [global] 00:18:01.353 thread=1 00:18:01.353 invalidate=1 00:18:01.353 rw=write 00:18:01.353 time_based=1 00:18:01.353 runtime=1 00:18:01.353 ioengine=libaio 00:18:01.353 direct=1 00:18:01.353 bs=4096 00:18:01.353 iodepth=128 00:18:01.353 norandommap=0 00:18:01.353 numjobs=1 00:18:01.353 00:18:01.353 verify_dump=1 00:18:01.353 verify_backlog=512 00:18:01.353 verify_state_save=0 00:18:01.353 do_verify=1 00:18:01.353 verify=crc32c-intel 00:18:01.353 [job0] 00:18:01.353 filename=/dev/nvme0n1 00:18:01.353 [job1] 00:18:01.353 filename=/dev/nvme0n2 00:18:01.353 [job2] 00:18:01.353 filename=/dev/nvme0n3 00:18:01.353 [job3] 00:18:01.353 filename=/dev/nvme0n4 00:18:01.353 Could not set queue depth (nvme0n1) 00:18:01.353 Could not set queue depth (nvme0n2) 00:18:01.353 Could not set queue depth (nvme0n3) 00:18:01.353 Could not set queue depth (nvme0n4) 00:18:01.612 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:01.612 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:01.612 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:01.612 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:01.612 fio-3.35 00:18:01.612 Starting 4 threads 00:18:02.989 00:18:02.989 job0: (groupid=0, jobs=1): err= 0: pid=3046411: Mon Jul 15 15:23:06 2024 00:18:02.989 read: IOPS=3746, BW=14.6MiB/s 
(15.3MB/s)(14.8MiB/1008msec) 00:18:02.989 slat (usec): min=2, max=14658, avg=124.71, stdev=904.23 00:18:02.989 clat (usec): min=3846, max=61798, avg=17491.32, stdev=7021.93 00:18:02.989 lat (usec): min=9230, max=61806, avg=17616.03, stdev=7070.95 00:18:02.989 clat percentiles (usec): 00:18:02.989 | 1.00th=[ 9372], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11731], 00:18:02.989 | 30.00th=[13304], 40.00th=[14484], 50.00th=[15795], 60.00th=[17171], 00:18:02.989 | 70.00th=[19792], 80.00th=[21890], 90.00th=[26608], 95.00th=[32113], 00:18:02.989 | 99.00th=[40633], 99.50th=[46924], 99.90th=[48497], 99.95th=[61604], 00:18:02.989 | 99.99th=[61604] 00:18:02.989 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:18:02.989 slat (usec): min=3, max=18025, avg=109.15, stdev=716.27 00:18:02.989 clat (usec): min=3198, max=51526, avg=14992.23, stdev=8228.92 00:18:02.989 lat (usec): min=4142, max=51540, avg=15101.38, stdev=8280.85 00:18:02.989 clat percentiles (usec): 00:18:02.989 | 1.00th=[ 5080], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 8979], 00:18:02.989 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[13435], 60.00th=[14746], 00:18:02.989 | 70.00th=[16319], 80.00th=[19006], 90.00th=[25822], 95.00th=[33424], 00:18:02.989 | 99.00th=[46924], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:18:02.989 | 99.99th=[51643] 00:18:02.989 bw ( KiB/s): min=16384, max=16384, per=24.58%, avg=16384.00, stdev= 0.00, samples=2 00:18:02.989 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:02.989 lat (msec) : 4=0.03%, 10=19.51%, 20=58.85%, 50=21.38%, 100=0.23% 00:18:02.989 cpu : usr=5.26%, sys=7.94%, ctx=266, majf=0, minf=1 00:18:02.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.989 issued rwts: total=3776,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.989 job1: (groupid=0, jobs=1): err= 0: pid=3046412: Mon Jul 15 15:23:06 2024 00:18:02.989 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:18:02.989 slat (nsec): min=1969, max=22191k, avg=90212.66, stdev=764645.87 00:18:02.989 clat (usec): min=2707, max=69815, avg=13552.03, stdev=9428.51 00:18:02.989 lat (usec): min=2716, max=73863, avg=13642.25, stdev=9494.67 00:18:02.989 clat percentiles (usec): 00:18:02.989 | 1.00th=[ 3228], 5.00th=[ 5342], 10.00th=[ 6783], 20.00th=[ 7701], 00:18:02.989 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[10290], 60.00th=[11469], 00:18:02.989 | 70.00th=[13435], 80.00th=[17957], 90.00th=[26084], 95.00th=[36439], 00:18:02.989 | 99.00th=[48497], 99.50th=[50594], 99.90th=[69731], 99.95th=[69731], 00:18:02.989 | 99.99th=[69731] 00:18:02.989 write: IOPS=3848, BW=15.0MiB/s (15.8MB/s)(15.2MiB/1010msec); 0 zone resets 00:18:02.989 slat (usec): min=2, max=12826, avg=151.51, stdev=888.36 00:18:02.989 clat (usec): min=910, max=130677, avg=20406.29, stdev=25672.29 00:18:02.989 lat (usec): min=925, max=130689, avg=20557.80, stdev=25824.76 00:18:02.989 clat percentiles (msec): 00:18:02.989 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:18:02.989 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:18:02.989 | 70.00th=[ 14], 80.00th=[ 22], 90.00th=[ 65], 95.00th=[ 83], 00:18:02.989 | 99.00th=[ 116], 99.50th=[ 123], 99.90th=[ 131], 99.95th=[ 131], 00:18:02.989 | 99.99th=[ 131] 00:18:02.989 bw ( 
KiB/s): min= 8328, max=21744, per=22.55%, avg=15036.00, stdev=9486.54, samples=2 00:18:02.989 iops : min= 2082, max= 5436, avg=3759.00, stdev=2371.64, samples=2 00:18:02.989 lat (usec) : 1000=0.04% 00:18:02.989 lat (msec) : 2=0.13%, 4=2.03%, 10=46.25%, 20=32.08%, 50=13.05% 00:18:02.989 lat (msec) : 100=4.74%, 250=1.67% 00:18:02.989 cpu : usr=4.86%, sys=4.96%, ctx=323, majf=0, minf=1 00:18:02.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.989 issued rwts: total=3584,3887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.989 job2: (groupid=0, jobs=1): err= 0: pid=3046414: Mon Jul 15 15:23:06 2024 00:18:02.989 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:18:02.989 slat (usec): min=2, max=12317, avg=82.94, stdev=596.31 00:18:02.989 clat (usec): min=3069, max=26283, avg=11444.37, stdev=3358.59 00:18:02.989 lat (usec): min=3110, max=27497, avg=11527.31, stdev=3395.19 00:18:02.989 clat percentiles (usec): 00:18:02.989 | 1.00th=[ 5538], 5.00th=[ 6915], 10.00th=[ 8291], 20.00th=[ 9110], 00:18:02.989 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10552], 60.00th=[11338], 00:18:02.989 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15926], 95.00th=[17957], 00:18:02.989 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22676], 99.95th=[25297], 00:18:02.989 | 99.99th=[26346] 00:18:02.989 write: IOPS=4730, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1005msec); 0 zone resets 00:18:02.989 slat (usec): min=2, max=19882, avg=115.53, stdev=767.62 00:18:02.989 clat (usec): min=1397, max=66073, avg=15674.42, stdev=12591.64 00:18:02.989 lat (usec): min=1414, max=66085, avg=15789.95, stdev=12675.97 00:18:02.989 clat percentiles (usec): 00:18:02.989 | 1.00th=[ 3130], 5.00th=[ 6915], 10.00th=[ 8356], 20.00th=[ 8979], 00:18:02.989 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11338], 60.00th=[12125], 00:18:02.989 | 70.00th=[13304], 80.00th=[16188], 90.00th=[38011], 95.00th=[50594], 00:18:02.989 | 99.00th=[60556], 99.50th=[61080], 99.90th=[66323], 99.95th=[66323], 00:18:02.989 | 99.99th=[66323] 00:18:02.989 bw ( KiB/s): min=16376, max=20640, per=27.76%, avg=18508.00, stdev=3015.10, samples=2 00:18:02.989 iops : min= 4094, max= 5160, avg=4627.00, stdev=753.78, samples=2 00:18:02.989 lat (msec) : 2=0.17%, 4=0.57%, 10=34.86%, 20=55.19%, 50=6.49% 00:18:02.989 lat (msec) : 100=2.71% 00:18:02.989 cpu : usr=4.28%, sys=5.78%, ctx=439, majf=0, minf=1 00:18:02.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.989 issued rwts: total=4608,4754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.989 job3: (groupid=0, jobs=1): err= 0: pid=3046415: Mon Jul 15 15:23:06 2024 00:18:02.989 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:18:02.989 slat (usec): min=2, max=22541, avg=123.99, stdev=996.17 00:18:02.989 clat (usec): min=3738, max=63230, avg=17568.99, stdev=9118.34 00:18:02.989 lat (usec): min=3867, max=63251, avg=17692.99, stdev=9201.85 00:18:02.989 clat percentiles (usec): 00:18:02.989 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[10421], 00:18:02.989 | 
30.00th=[11469], 40.00th=[12256], 50.00th=[14615], 60.00th=[17171], 00:18:02.989 | 70.00th=[19792], 80.00th=[24249], 90.00th=[32637], 95.00th=[35390], 00:18:02.989 | 99.00th=[46400], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:18:02.989 | 99.99th=[63177] 00:18:02.989 write: IOPS=4064, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:18:02.989 slat (usec): min=2, max=12954, avg=102.84, stdev=742.94 00:18:02.989 clat (usec): min=1190, max=33093, avg=13640.84, stdev=4220.95 00:18:02.989 lat (usec): min=1406, max=33104, avg=13743.68, stdev=4261.58 00:18:02.989 clat percentiles (usec): 00:18:02.989 | 1.00th=[ 4817], 5.00th=[ 7177], 10.00th=[ 8356], 20.00th=[10159], 00:18:02.989 | 30.00th=[11207], 40.00th=[12387], 50.00th=[13698], 60.00th=[14353], 00:18:02.989 | 70.00th=[15795], 80.00th=[16712], 90.00th=[18482], 95.00th=[21890], 00:18:02.989 | 99.00th=[24249], 99.50th=[25297], 99.90th=[25297], 99.95th=[25822], 00:18:02.989 | 99.99th=[33162] 00:18:02.989 bw ( KiB/s): min=16384, max=16384, per=24.58%, avg=16384.00, stdev= 0.00, samples=2 00:18:02.989 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:02.989 lat (msec) : 2=0.10%, 4=0.24%, 10=18.13%, 20=63.04%, 50=18.27% 00:18:02.989 lat (msec) : 100=0.22% 00:18:02.989 cpu : usr=4.07%, sys=7.05%, ctx=273, majf=0, minf=1 00:18:02.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.989 issued rwts: total=4096,4097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.989 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.989 00:18:02.989 Run status group 0 (all jobs): 00:18:02.989 READ: bw=62.1MiB/s (65.1MB/s), 13.9MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=62.8MiB (65.8MB), run=1005-1010msec 00:18:02.989 WRITE: bw=65.1MiB/s (68.3MB/s), 15.0MiB/s-18.5MiB/s (15.8MB/s-19.4MB/s), io=65.8MiB (69.0MB), run=1005-1010msec 00:18:02.989 00:18:02.989 Disk stats (read/write): 00:18:02.989 nvme0n1: ios=3084/3079, merge=0/0, ticks=43299/38507, in_queue=81806, util=97.39% 00:18:02.990 nvme0n2: ios=3595/3599, merge=0/0, ticks=38629/49514, in_queue=88143, util=99.90% 00:18:02.990 nvme0n3: ios=3518/3584, merge=0/0, ticks=31148/49146, in_queue=80294, util=99.57% 00:18:02.990 nvme0n4: ios=3400/3584, merge=0/0, ticks=35375/30375, in_queue=65750, util=88.97% 00:18:02.990 15:23:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:02.990 [global] 00:18:02.990 thread=1 00:18:02.990 invalidate=1 00:18:02.990 rw=randwrite 00:18:02.990 time_based=1 00:18:02.990 runtime=1 00:18:02.990 ioengine=libaio 00:18:02.990 direct=1 00:18:02.990 bs=4096 00:18:02.990 iodepth=128 00:18:02.990 norandommap=0 00:18:02.990 numjobs=1 00:18:02.990 00:18:02.990 verify_dump=1 00:18:02.990 verify_backlog=512 00:18:02.990 verify_state_save=0 00:18:02.990 do_verify=1 00:18:02.990 verify=crc32c-intel 00:18:02.990 [job0] 00:18:02.990 filename=/dev/nvme0n1 00:18:02.990 [job1] 00:18:02.990 filename=/dev/nvme0n2 00:18:02.990 [job2] 00:18:02.990 filename=/dev/nvme0n3 00:18:02.990 [job3] 00:18:02.990 filename=/dev/nvme0n4 00:18:02.990 Could not set queue depth (nvme0n1) 00:18:02.990 Could not set queue depth (nvme0n2) 00:18:02.990 Could not set queue depth (nvme0n3) 00:18:02.990 Could not set queue depth (nvme0n4) 00:18:03.247 job0: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:03.248 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:03.248 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:03.248 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:03.248 fio-3.35 00:18:03.248 Starting 4 threads 00:18:04.624 00:18:04.624 job0: (groupid=0, jobs=1): err= 0: pid=3046831: Mon Jul 15 15:23:08 2024 00:18:04.624 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:18:04.624 slat (nsec): min=1700, max=18568k, avg=123666.49, stdev=860175.06 00:18:04.624 clat (usec): min=5753, max=56930, avg=16652.71, stdev=7680.35 00:18:04.624 lat (usec): min=5758, max=63110, avg=16776.38, stdev=7711.85 00:18:04.624 clat percentiles (usec): 00:18:04.624 | 1.00th=[ 5997], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[11207], 00:18:04.624 | 30.00th=[11863], 40.00th=[12518], 50.00th=[14222], 60.00th=[16712], 00:18:04.624 | 70.00th=[18220], 80.00th=[21103], 90.00th=[27395], 95.00th=[31589], 00:18:04.624 | 99.00th=[46400], 99.50th=[49021], 99.90th=[56886], 99.95th=[56886], 00:18:04.624 | 99.99th=[56886] 00:18:04.624 write: IOPS=4080, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:18:04.624 slat (usec): min=2, max=15017, avg=127.97, stdev=857.17 00:18:04.624 clat (usec): min=890, max=56631, avg=16512.55, stdev=9537.04 00:18:04.624 lat (usec): min=1854, max=56639, avg=16640.52, stdev=9605.38 00:18:04.624 clat percentiles (usec): 00:18:04.624 | 1.00th=[ 3556], 5.00th=[ 7898], 10.00th=[ 8848], 20.00th=[10290], 00:18:04.624 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12649], 60.00th=[14091], 00:18:04.624 | 70.00th=[16581], 80.00th=[23725], 90.00th=[28705], 95.00th=[35914], 00:18:04.624 | 99.00th=[51643], 99.50th=[52691], 99.90th=[56361], 99.95th=[56886], 00:18:04.624 | 99.99th=[56886] 00:18:04.624 bw ( KiB/s): min=15336, max=16384, per=22.97%, avg=15860.00, stdev=741.05, samples=2 00:18:04.624 iops : min= 3834, max= 4096, avg=3965.00, stdev=185.26, samples=2 00:18:04.624 lat (usec) : 1000=0.01% 00:18:04.624 lat (msec) : 2=0.07%, 4=0.69%, 10=13.34%, 20=59.92%, 50=25.05% 00:18:04.624 lat (msec) : 100=0.92% 00:18:04.624 cpu : usr=2.69%, sys=5.39%, ctx=327, majf=0, minf=1 00:18:04.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:04.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:04.624 issued rwts: total=3584,4093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:04.624 job1: (groupid=0, jobs=1): err= 0: pid=3046832: Mon Jul 15 15:23:08 2024 00:18:04.624 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:18:04.624 slat (nsec): min=1959, max=6906.5k, avg=83309.83, stdev=487185.96 00:18:04.624 clat (usec): min=811, max=36721, avg=11215.65, stdev=4206.71 00:18:04.624 lat (usec): min=825, max=36744, avg=11298.96, stdev=4243.18 00:18:04.625 clat percentiles (usec): 00:18:04.625 | 1.00th=[ 1975], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[ 8848], 00:18:04.625 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:18:04.625 | 70.00th=[11207], 80.00th=[12256], 90.00th=[15795], 95.00th=[18482], 00:18:04.625 | 99.00th=[29754], 99.50th=[32637], 99.90th=[34866], 
99.95th=[34866], 00:18:04.625 | 99.99th=[36963] 00:18:04.625 write: IOPS=5536, BW=21.6MiB/s (22.7MB/s)(21.8MiB/1008msec); 0 zone resets 00:18:04.625 slat (usec): min=2, max=13819, avg=94.15, stdev=551.17 00:18:04.625 clat (usec): min=1390, max=45332, avg=12621.40, stdev=6384.11 00:18:04.625 lat (usec): min=1402, max=45366, avg=12715.55, stdev=6428.41 00:18:04.625 clat percentiles (usec): 00:18:04.625 | 1.00th=[ 3916], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 9110], 00:18:04.625 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:18:04.625 | 70.00th=[11994], 80.00th=[13566], 90.00th=[22938], 95.00th=[27919], 00:18:04.625 | 99.00th=[36439], 99.50th=[37487], 99.90th=[38011], 99.95th=[42206], 00:18:04.625 | 99.99th=[45351] 00:18:04.625 bw ( KiB/s): min=19048, max=24576, per=31.60%, avg=21812.00, stdev=3908.89, samples=2 00:18:04.625 iops : min= 4762, max= 6144, avg=5453.00, stdev=977.22, samples=2 00:18:04.625 lat (usec) : 1000=0.01% 00:18:04.625 lat (msec) : 2=0.75%, 4=0.87%, 10=34.99%, 20=55.34%, 50=8.05% 00:18:04.625 cpu : usr=4.77%, sys=6.26%, ctx=576, majf=0, minf=1 00:18:04.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:04.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:04.625 issued rwts: total=5120,5581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:04.625 job2: (groupid=0, jobs=1): err= 0: pid=3046833: Mon Jul 15 15:23:08 2024 00:18:04.625 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:18:04.625 slat (nsec): min=1742, max=25420k, avg=96757.75, stdev=761049.18 00:18:04.625 clat (usec): min=732, max=40689, avg=13978.04, stdev=6085.18 00:18:04.625 lat (usec): min=1335, max=40700, avg=14074.79, stdev=6107.70 00:18:04.625 clat percentiles (usec): 00:18:04.625 | 1.00th=[ 1614], 5.00th=[ 5735], 10.00th=[ 7635], 20.00th=[ 9896], 00:18:04.625 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12387], 60.00th=[13829], 00:18:04.625 | 70.00th=[15533], 80.00th=[18482], 90.00th=[21103], 95.00th=[24773], 00:18:04.625 | 99.00th=[38011], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:18:04.625 | 99.99th=[40633] 00:18:04.625 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:18:04.625 slat (usec): min=2, max=11435, avg=100.02, stdev=633.32 00:18:04.625 clat (usec): min=758, max=39978, avg=13581.73, stdev=5753.48 00:18:04.625 lat (usec): min=2052, max=39988, avg=13681.75, stdev=5787.01 00:18:04.625 clat percentiles (usec): 00:18:04.625 | 1.00th=[ 4948], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 9241], 00:18:04.625 | 30.00th=[10683], 40.00th=[11731], 50.00th=[11994], 60.00th=[13566], 00:18:04.625 | 70.00th=[15139], 80.00th=[17957], 90.00th=[21365], 95.00th=[23200], 00:18:04.625 | 99.00th=[34866], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:18:04.625 | 99.99th=[40109] 00:18:04.625 bw ( KiB/s): min=20472, max=20472, per=29.65%, avg=20472.00, stdev= 0.00, samples=1 00:18:04.625 iops : min= 5118, max= 5118, avg=5118.00, stdev= 0.00, samples=1 00:18:04.625 lat (usec) : 750=0.01%, 1000=0.01% 00:18:04.625 lat (msec) : 2=0.65%, 4=0.67%, 10=21.90%, 20=63.42%, 50=13.34% 00:18:04.625 cpu : usr=5.00%, sys=6.39%, ctx=433, majf=0, minf=1 00:18:04.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:04.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.625 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:04.625 issued rwts: total=4599,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:04.625 job3: (groupid=0, jobs=1): err= 0: pid=3046834: Mon Jul 15 15:23:08 2024 00:18:04.625 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:18:04.625 slat (usec): min=2, max=22694, avg=149.16, stdev=1092.36 00:18:04.625 clat (usec): min=5776, max=64094, avg=19018.17, stdev=9887.93 00:18:04.625 lat (usec): min=5779, max=64111, avg=19167.33, stdev=9989.75 00:18:04.625 clat percentiles (usec): 00:18:04.625 | 1.00th=[ 7832], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[10552], 00:18:04.625 | 30.00th=[11731], 40.00th=[13829], 50.00th=[15139], 60.00th=[19268], 00:18:04.625 | 70.00th=[22938], 80.00th=[27395], 90.00th=[33162], 95.00th=[41157], 00:18:04.625 | 99.00th=[42730], 99.50th=[55313], 99.90th=[55837], 99.95th=[60556], 00:18:04.625 | 99.99th=[64226] 00:18:04.625 write: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1003msec); 0 zone resets 00:18:04.625 slat (usec): min=3, max=16468, avg=165.40, stdev=995.33 00:18:04.625 clat (usec): min=1341, max=82599, avg=21694.02, stdev=15087.81 00:18:04.625 lat (usec): min=4669, max=82606, avg=21859.42, stdev=15195.35 00:18:04.625 clat percentiles (usec): 00:18:04.625 | 1.00th=[ 4883], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[11076], 00:18:04.625 | 30.00th=[12518], 40.00th=[14746], 50.00th=[17171], 60.00th=[20579], 00:18:04.625 | 70.00th=[22414], 80.00th=[29492], 90.00th=[37487], 95.00th=[55313], 00:18:04.625 | 99.00th=[78119], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:18:04.625 | 99.99th=[82314] 00:18:04.625 bw ( KiB/s): min= 9080, max=15496, per=17.80%, avg=12288.00, stdev=4536.80, samples=2 00:18:04.625 iops : min= 2270, max= 3874, avg=3072.00, stdev=1134.20, samples=2 00:18:04.625 lat (msec) : 2=0.02%, 10=15.95%, 20=44.51%, 50=36.54%, 100=2.97% 00:18:04.625 cpu : usr=3.89%, sys=3.99%, ctx=363, majf=0, minf=1 00:18:04.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:04.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:04.625 issued rwts: total=3072,3115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:04.625 00:18:04.625 Run status group 0 (all jobs): 00:18:04.625 READ: bw=63.5MiB/s (66.5MB/s), 12.0MiB/s-19.8MiB/s (12.5MB/s-20.8MB/s), io=64.0MiB (67.1MB), run=1002-1008msec 00:18:04.625 WRITE: bw=67.4MiB/s (70.7MB/s), 12.1MiB/s-21.6MiB/s (12.7MB/s-22.7MB/s), io=68.0MiB (71.3MB), run=1002-1008msec 00:18:04.625 00:18:04.625 Disk stats (read/write): 00:18:04.625 nvme0n1: ios=3096/3072, merge=0/0, ticks=29547/34686, in_queue=64233, util=84.27% 00:18:04.625 nvme0n2: ios=4449/4608, merge=0/0, ticks=17034/20877, in_queue=37911, util=84.61% 00:18:04.625 nvme0n3: ios=3584/3903, merge=0/0, ticks=35871/37524, in_queue=73395, util=87.74% 00:18:04.625 nvme0n4: ios=2083/2538, merge=0/0, ticks=22353/27939, in_queue=50292, util=99.35% 00:18:04.625 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:04.625 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3047095 00:18:04.625 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:04.625 15:23:08 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:04.625 [global] 00:18:04.625 thread=1 00:18:04.625 invalidate=1 00:18:04.625 rw=read 00:18:04.625 time_based=1 00:18:04.625 runtime=10 00:18:04.625 ioengine=libaio 00:18:04.625 direct=1 00:18:04.625 bs=4096 00:18:04.625 iodepth=1 00:18:04.625 norandommap=1 00:18:04.625 numjobs=1 00:18:04.625 00:18:04.625 [job0] 00:18:04.625 filename=/dev/nvme0n1 00:18:04.625 [job1] 00:18:04.625 filename=/dev/nvme0n2 00:18:04.625 [job2] 00:18:04.625 filename=/dev/nvme0n3 00:18:04.625 [job3] 00:18:04.625 filename=/dev/nvme0n4 00:18:04.625 Could not set queue depth (nvme0n1) 00:18:04.625 Could not set queue depth (nvme0n2) 00:18:04.625 Could not set queue depth (nvme0n3) 00:18:04.625 Could not set queue depth (nvme0n4) 00:18:04.883 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:04.883 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:04.883 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:04.883 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:04.883 fio-3.35 00:18:04.883 Starting 4 threads 00:18:07.414 15:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:07.673 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=20951040, buflen=4096 00:18:07.673 fio: pid=3047260, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:07.673 15:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:07.932 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=21024768, buflen=4096 00:18:07.932 fio: pid=3047259, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:07.932 15:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:07.932 15:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:08.190 15:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:08.190 15:23:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:08.190 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=344064, buflen=4096 00:18:08.190 fio: pid=3047255, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:08.190 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11980800, buflen=4096 00:18:08.190 fio: pid=3047257, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:08.190 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:08.190 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:08.449 00:18:08.449 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3047255: Mon Jul 15 15:23:12 2024 00:18:08.449 read: IOPS=28, BW=111KiB/s 
(114kB/s)(336KiB/3024msec) 00:18:08.449 slat (nsec): min=9960, max=72721, avg=24494.49, stdev=7855.05 00:18:08.449 clat (usec): min=432, max=42210, avg=35721.30, stdev=13736.82 00:18:08.449 lat (usec): min=457, max=42220, avg=35745.89, stdev=13738.36 00:18:08.449 clat percentiles (usec): 00:18:08.449 | 1.00th=[ 433], 5.00th=[ 498], 10.00th=[ 611], 20.00th=[41157], 00:18:08.449 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:08.449 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:08.449 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:08.449 | 99.99th=[42206] 00:18:08.449 bw ( KiB/s): min= 96, max= 128, per=0.62%, avg=104.00, stdev=13.86, samples=5 00:18:08.449 iops : min= 24, max= 32, avg=26.00, stdev= 3.46, samples=5 00:18:08.449 lat (usec) : 500=5.88%, 750=7.06% 00:18:08.449 lat (msec) : 50=85.88% 00:18:08.449 cpu : usr=0.00%, sys=0.17%, ctx=88, majf=0, minf=1 00:18:08.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.449 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3047257: Mon Jul 15 15:23:12 2024 00:18:08.449 read: IOPS=918, BW=3672KiB/s (3760kB/s)(11.4MiB/3186msec) 00:18:08.449 slat (usec): min=8, max=14749, avg=25.47, stdev=433.42 00:18:08.449 clat (usec): min=248, max=42015, avg=1059.02, stdev=4903.85 00:18:08.449 lat (usec): min=258, max=42040, avg=1084.50, stdev=4923.33 00:18:08.449 clat percentiles (usec): 00:18:08.449 | 1.00th=[ 310], 5.00th=[ 371], 10.00th=[ 388], 20.00th=[ 400], 00:18:08.449 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 437], 60.00th=[ 486], 00:18:08.449 | 70.00th=[ 502], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 553], 00:18:08.449 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:08.449 | 99.99th=[42206] 00:18:08.449 bw ( KiB/s): min= 96, max= 8368, per=20.92%, avg=3482.17, stdev=3208.45, samples=6 00:18:08.449 iops : min= 24, max= 2092, avg=870.50, stdev=802.09, samples=6 00:18:08.449 lat (usec) : 250=0.07%, 500=69.10%, 750=29.08%, 1000=0.07% 00:18:08.449 lat (msec) : 2=0.03%, 4=0.07%, 10=0.03%, 20=0.03%, 50=1.47% 00:18:08.449 cpu : usr=0.38%, sys=1.13%, ctx=2932, majf=0, minf=1 00:18:08.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 issued rwts: total=2926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.449 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3047259: Mon Jul 15 15:23:12 2024 00:18:08.449 read: IOPS=1820, BW=7281KiB/s (7456kB/s)(20.1MiB/2820msec) 00:18:08.449 slat (usec): min=8, max=7308, avg=13.61, stdev=134.12 00:18:08.449 clat (usec): min=289, max=41371, avg=529.61, stdev=809.50 00:18:08.449 lat (usec): min=299, max=41384, avg=541.79, stdev=814.63 00:18:08.449 clat percentiles (usec): 00:18:08.449 | 1.00th=[ 322], 5.00th=[ 359], 10.00th=[ 412], 20.00th=[ 469], 00:18:08.449 | 30.00th=[ 498], 40.00th=[ 510], 
50.00th=[ 523], 60.00th=[ 529], 00:18:08.449 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 644], 00:18:08.449 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 1254], 99.95th=[ 4146], 00:18:08.449 | 99.99th=[41157] 00:18:08.449 bw ( KiB/s): min= 6760, max= 8096, per=44.27%, avg=7369.60, stdev=590.09, samples=5 00:18:08.449 iops : min= 1690, max= 2024, avg=1842.40, stdev=147.52, samples=5 00:18:08.449 lat (usec) : 500=32.06%, 750=67.65%, 1000=0.12% 00:18:08.449 lat (msec) : 2=0.06%, 4=0.02%, 10=0.04%, 50=0.04% 00:18:08.449 cpu : usr=0.96%, sys=3.09%, ctx=5136, majf=0, minf=1 00:18:08.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 issued rwts: total=5134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.449 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3047260: Mon Jul 15 15:23:12 2024 00:18:08.449 read: IOPS=1948, BW=7794KiB/s (7981kB/s)(20.0MiB/2625msec) 00:18:08.449 slat (nsec): min=8456, max=52941, avg=10008.06, stdev=2448.48 00:18:08.449 clat (usec): min=232, max=41320, avg=497.26, stdev=810.74 00:18:08.449 lat (usec): min=241, max=41344, avg=507.27, stdev=811.06 00:18:08.449 clat percentiles (usec): 00:18:08.449 | 1.00th=[ 262], 5.00th=[ 347], 10.00th=[ 408], 20.00th=[ 424], 00:18:08.449 | 30.00th=[ 441], 40.00th=[ 482], 50.00th=[ 506], 60.00th=[ 515], 00:18:08.449 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 553], 00:18:08.449 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 938], 99.95th=[ 2376], 00:18:08.449 | 99.99th=[41157] 00:18:08.449 bw ( KiB/s): min= 7192, max= 8328, per=46.96%, avg=7816.00, stdev=467.40, samples=5 00:18:08.449 iops : min= 1798, max= 2082, avg=1954.00, stdev=116.85, samples=5 00:18:08.449 lat (usec) : 250=0.43%, 500=46.68%, 750=52.74%, 1000=0.06% 00:18:08.449 lat (msec) : 2=0.02%, 4=0.02%, 50=0.04% 00:18:08.449 cpu : usr=1.03%, sys=3.09%, ctx=5117, majf=0, minf=2 00:18:08.449 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.449 issued rwts: total=5116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.449 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.449 00:18:08.449 Run status group 0 (all jobs): 00:18:08.449 READ: bw=16.3MiB/s (17.0MB/s), 111KiB/s-7794KiB/s (114kB/s-7981kB/s), io=51.8MiB (54.3MB), run=2625-3186msec 00:18:08.449 00:18:08.449 Disk stats (read/write): 00:18:08.449 nvme0n1: ios=113/0, merge=0/0, ticks=3803/0, in_queue=3803, util=99.16% 00:18:08.449 nvme0n2: ios=2740/0, merge=0/0, ticks=4041/0, in_queue=4041, util=98.08% 00:18:08.449 nvme0n3: ios=4738/0, merge=0/0, ticks=2463/0, in_queue=2463, util=95.94% 00:18:08.449 nvme0n4: ios=5049/0, merge=0/0, ticks=2473/0, in_queue=2473, util=96.41% 00:18:08.449 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:08.449 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:08.708 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:08.708 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:08.967 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:08.967 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:08.967 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:08.967 15:23:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:09.226 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:09.226 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3047095 00:18:09.226 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:09.226 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:09.491 nvmf hotplug test: fio failed as expected 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.491 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.491 rmmod nvme_tcp 
00:18:09.491 rmmod nvme_fabrics 00:18:09.755 rmmod nvme_keyring 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3044027 ']' 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3044027 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3044027 ']' 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3044027 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3044027 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3044027' 00:18:09.755 killing process with pid 3044027 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3044027 00:18:09.755 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3044027 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.014 15:23:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.916 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.916 00:18:11.916 real 0m28.548s 00:18:11.916 user 2m2.335s 00:18:11.916 sys 0m10.201s 00:18:11.916 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.916 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.916 ************************************ 00:18:11.916 END TEST nvmf_fio_target 00:18:11.916 ************************************ 00:18:11.916 15:23:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.916 15:23:15 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:11.916 15:23:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.916 15:23:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.916 15:23:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:12.175 ************************************ 00:18:12.175 START TEST nvmf_bdevio 
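The teardown traced above runs in a fixed order: the host disconnects from the subsystem (nvme disconnect -n nqn.2016-06.io.spdk:cnode1), the subsystem is deleted over RPC, the kernel transport modules are unloaded, the nvmf_tgt process (pid 3044027 in this run) is killed and reaped, and finally the SPDK network namespace is removed. A minimal sketch of the equivalent manual sequence follows; the PID and NQN are taken from this run's trace, and the break-on-success condition in the unload loop is an assumption, since the helper's full loop body is not visible here.

  # host-side disconnect and subsystem removal, as traced above
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unload transport modules; set +e because rmmod can fail while references drain
  set +e
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # assumed break-on-success; trace shows up to 20 attempts
  done
  modprobe -v -r nvme-fabrics
  set -e
  # stop the target process and reap it (PID from this run's log)
  kill 3044027
  wait 3044027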
00:18:12.175 ************************************ 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:12.175 * Looking for test storage... 00:18:12.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.175 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:12.176 15:23:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:18.742 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:18.743 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:18.743 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:18.743 Found net devices under 0000:af:00.0: cvl_0_0 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:18.743 
Found net devices under 0000:af:00.1: cvl_0_1 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:18.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:18:18.743 00:18:18.743 --- 10.0.0.2 ping statistics --- 00:18:18.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.743 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:18:18.743 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:18:19.001 00:18:19.001 --- 10.0.0.1 ping statistics --- 00:18:19.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.001 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3051743 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3051743 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3051743 ']' 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.001 15:23:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:19.001 [2024-07-15 15:23:22.733964] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:18:19.001 [2024-07-15 15:23:22.734010] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.001 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.001 [2024-07-15 15:23:22.807671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.001 [2024-07-15 15:23:22.880426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.001 [2024-07-15 15:23:22.880467] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
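The reactor mask passed to the target above (nvmf_tgt -i 0 -e 0xFFFF -m 0x78, echoed into the EAL parameters as -c 0x78) selects CPU cores 3 through 6: 0x78 is binary 0111 1000, which is why the banner reports "Total cores available: 4" and the reactors in the following lines come up on cores 3, 4, 5 and 6. A quick way to decode such a mask (a sketch; plain bash arithmetic, nothing SPDK-specific):

    printf 'mask 0x78 -> cores:'; for i in {0..7}; do (( (0x78 >> i) & 1 )) && printf ' %d' "$i"; done; echo
    # prints: mask 0x78 -> cores: 3 4 5 6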
00:18:19.001 [2024-07-15 15:23:22.880476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.001 [2024-07-15 15:23:22.880484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.001 [2024-07-15 15:23:22.880491] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.001 [2024-07-15 15:23:22.880620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:19.001 [2024-07-15 15:23:22.880726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:19.001 [2024-07-15 15:23:22.880841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.001 [2024-07-15 15:23:22.880886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:19.941 [2024-07-15 15:23:23.602830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:19.941 Malloc0 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
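Condensed, the rpc_cmd calls traced above provision the bdevio target in five steps; the "NVMe/TCP Target Listening" notice on the next line confirms the final one. A minimal standalone sketch with the same arguments, assuming it is run from the SPDK checkout (rpc_cmd is a wrapper around rpc.py, and the default /var/tmp/spdk.sock socket is reachable from the root namespace):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport with the traced options
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420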
00:18:19.941 [2024-07-15 15:23:23.649345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:19.941 { 00:18:19.941 "params": { 00:18:19.941 "name": "Nvme$subsystem", 00:18:19.941 "trtype": "$TEST_TRANSPORT", 00:18:19.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:19.941 "adrfam": "ipv4", 00:18:19.941 "trsvcid": "$NVMF_PORT", 00:18:19.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:19.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:19.941 "hdgst": ${hdgst:-false}, 00:18:19.941 "ddgst": ${ddgst:-false} 00:18:19.941 }, 00:18:19.941 "method": "bdev_nvme_attach_controller" 00:18:19.941 } 00:18:19.941 EOF 00:18:19.941 )") 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:19.941 15:23:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:19.941 "params": { 00:18:19.941 "name": "Nvme1", 00:18:19.941 "trtype": "tcp", 00:18:19.941 "traddr": "10.0.0.2", 00:18:19.941 "adrfam": "ipv4", 00:18:19.941 "trsvcid": "4420", 00:18:19.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.941 "hdgst": false, 00:18:19.941 "ddgst": false 00:18:19.941 }, 00:18:19.941 "method": "bdev_nvme_attach_controller" 00:18:19.941 }' 00:18:19.941 [2024-07-15 15:23:23.701046] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:18:19.941 [2024-07-15 15:23:23.701095] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052002 ] 00:18:19.941 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.941 [2024-07-15 15:23:23.772398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.941 [2024-07-15 15:23:23.843842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.941 [2024-07-15 15:23:23.843904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.941 [2024-07-15 15:23:23.843907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.200 I/O targets: 00:18:20.200 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:20.200 00:18:20.200 00:18:20.200 CUnit - A unit testing framework for C - Version 2.1-3 00:18:20.200 http://cunit.sourceforge.net/ 00:18:20.200 00:18:20.200 00:18:20.200 Suite: bdevio tests on: Nvme1n1 00:18:20.200 Test: blockdev write read block ...passed 00:18:20.200 Test: blockdev write zeroes read block ...passed 00:18:20.200 Test: blockdev write zeroes read no split ...passed 00:18:20.458 Test: blockdev write zeroes read split ...passed 00:18:20.458 Test: blockdev write zeroes read split partial ...passed 00:18:20.458 Test: blockdev reset ...[2024-07-15 15:23:24.220651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:20.458 [2024-07-15 15:23:24.220714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d2810 (9): Bad file descriptor 00:18:20.458 [2024-07-15 15:23:24.241401] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:20.458 passed 00:18:20.458 Test: blockdev write read 8 blocks ...passed 00:18:20.458 Test: blockdev write read size > 128k ...passed 00:18:20.458 Test: blockdev write read invalid size ...passed 00:18:20.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:20.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:20.458 Test: blockdev write read max offset ...passed 00:18:20.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:20.717 Test: blockdev writev readv 8 blocks ...passed 00:18:20.717 Test: blockdev writev readv 30 x 1block ...passed 00:18:20.717 Test: blockdev writev readv block ...passed 00:18:20.717 Test: blockdev writev readv size > 128k ...passed 00:18:20.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:20.717 Test: blockdev comparev and writev ...[2024-07-15 15:23:24.417462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.417491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.417507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.417521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.417846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.417859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.417873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.417882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.418207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.418221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.418234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.418244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.418592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.418606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.418620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:20.717 [2024-07-15 15:23:24.418630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:20.717 passed 00:18:20.717 Test: blockdev nvme passthru rw ...passed 00:18:20.717 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:23:24.501416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:20.717 [2024-07-15 15:23:24.501433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.501641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:20.717 [2024-07-15 15:23:24.501655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.501863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:20.717 [2024-07-15 15:23:24.501875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:20.717 [2024-07-15 15:23:24.502077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:20.717 [2024-07-15 15:23:24.502090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:20.717 passed 00:18:20.717 Test: blockdev nvme admin passthru ...passed 00:18:20.717 Test: blockdev copy ...passed 00:18:20.717 00:18:20.717 Run Summary: Type Total Ran Passed Failed Inactive 00:18:20.717 suites 1 1 n/a 0 0 00:18:20.717 tests 23 23 23 0 0 00:18:20.717 asserts 152 152 152 0 n/a 00:18:20.717 00:18:20.717 Elapsed time = 1.123 seconds 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:20.976 rmmod nvme_tcp 00:18:20.976 rmmod nvme_fabrics 00:18:20.976 rmmod nvme_keyring 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3051743 ']' 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3051743 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3051743 ']' 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3051743 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3051743 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3051743' 00:18:20.976 killing process with pid 3051743 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3051743 00:18:20.976 15:23:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3051743 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:21.235 15:23:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.770 15:23:27 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:23.770 00:18:23.770 real 0m11.329s 00:18:23.770 user 0m12.470s 00:18:23.770 sys 0m5.754s 00:18:23.770 15:23:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.770 15:23:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:23.770 ************************************ 00:18:23.770 END TEST nvmf_bdevio 00:18:23.770 ************************************ 00:18:23.770 15:23:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:23.770 15:23:27 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:23.770 15:23:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:23.770 15:23:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:23.770 15:23:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:23.770 ************************************ 00:18:23.770 START TEST nvmf_auth_target 00:18:23.770 ************************************ 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:23.770 * Looking for test storage... 
00:18:23.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.770 15:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.771 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:23.771 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:23.771 15:23:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:23.771 15:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.357 15:23:33 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:30.357 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:30.357 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:18:30.357 Found net devices under 0000:af:00.0: cvl_0_0 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:30.357 Found net devices under 0000:af:00.1: cvl_0_1 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.357 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:30.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:18:30.358 00:18:30.358 --- 10.0.0.2 ping statistics --- 00:18:30.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.358 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:18:30.358 00:18:30.358 --- 10.0.0.1 ping statistics --- 00:18:30.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.358 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3055710 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3055710 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3055710 ']' 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
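The nvmf_tcp_init sequence traced above rebuilds the same two-port topology the bdevio run used: the first E810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two cross-namespace pings verify the path before the target starts. Condensed from the trace (same interface names assumed):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1              # start clean
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1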
00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.358 15:23:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:30.617 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.617 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:30.617 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.617 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:30.617 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3055939 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d8a8a2af2ddf7068d8bd293b314c9567601eaf5e3e7de9c7 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.NSG 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d8a8a2af2ddf7068d8bd293b314c9567601eaf5e3e7de9c7 0 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d8a8a2af2ddf7068d8bd293b314c9567601eaf5e3e7de9c7 0 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d8a8a2af2ddf7068d8bd293b314c9567601eaf5e3e7de9c7 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.NSG 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.NSG 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # 
keys[0]=/tmp/spdk.key-null.NSG 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7cd696aa96264b16e88225239b1a08fca3a10718408308eb77a909773a02f481 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.W2e 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7cd696aa96264b16e88225239b1a08fca3a10718408308eb77a909773a02f481 3 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7cd696aa96264b16e88225239b1a08fca3a10718408308eb77a909773a02f481 3 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7cd696aa96264b16e88225239b1a08fca3a10718408308eb77a909773a02f481 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.W2e 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.W2e 00:18:30.876 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.W2e 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eab4fb0be559371f004ebbebb402a64d 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.xSs 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eab4fb0be559371f004ebbebb402a64d 1 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eab4fb0be559371f004ebbebb402a64d 1 00:18:30.877 15:23:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eab4fb0be559371f004ebbebb402a64d 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.xSs 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.xSs 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.xSs 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bd2bd372dce9e3774819086dd61c055c68bb467cc576db9f 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fbV 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bd2bd372dce9e3774819086dd61c055c68bb467cc576db9f 2 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bd2bd372dce9e3774819086dd61c055c68bb467cc576db9f 2 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bd2bd372dce9e3774819086dd61c055c68bb467cc576db9f 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:30.877 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fbV 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fbV 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.fbV 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:31.135 15:23:34 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=87310638b2de0d8719955b7a13464c48b477d6bfbef095cd 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ezA 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 87310638b2de0d8719955b7a13464c48b477d6bfbef095cd 2 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 87310638b2de0d8719955b7a13464c48b477d6bfbef095cd 2 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=87310638b2de0d8719955b7a13464c48b477d6bfbef095cd 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ezA 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ezA 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ezA 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e6d9ed63c7545f0fc260080d9ded7de7 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NcN 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e6d9ed63c7545f0fc260080d9ded7de7 1 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e6d9ed63c7545f0fc260080d9ded7de7 1 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e6d9ed63c7545f0fc260080d9ded7de7 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NcN 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NcN 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.NcN 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:31.135 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=51878367350e1535c23f812763f7db193c4abb0783d852fba7c5db688d2edf03 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GL9 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 51878367350e1535c23f812763f7db193c4abb0783d852fba7c5db688d2edf03 3 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 51878367350e1535c23f812763f7db193c4abb0783d852fba7c5db688d2edf03 3 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=51878367350e1535c23f812763f7db193c4abb0783d852fba7c5db688d2edf03 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GL9 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GL9 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.GL9 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3055710 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3055710 ']' 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
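The gen_dhchap_key calls traced above all follow one pattern: pull len/2 random bytes from /dev/urandom as a hex string with xxd, then wrap that string in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash id>:<base64 payload>: (null=0, sha256=1, sha384=2, sha512=3, per the digests map in the trace). The python body behind format_key is collapsed to "python -" by xtrace, so the sketch below reconstructs it from the secrets echoed later in this log, which decode to the ASCII key followed by four extra bytes; treating those four bytes as a little-endian CRC-32 of the key is an assumption, not a quote of nvmf/common.sh:

    # Minimal, self-contained sketch of the key-generation flow seen above.
    # ASSUMPTION: the CRC-32 trailer; everything else mirrors the trace.
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    format_key() {  # format_key DHHC-1 <hex-string> <digest-id>
        local prefix=$1 key=$2 digest=$3
        python3 - "$prefix" "$key" "$digest" <<'PY'
    import base64, sys, zlib
    prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
    # Payload: base64(secret bytes + little-endian CRC-32 of the secret) -- assumed.
    b64 = base64.b64encode(key + zlib.crc32(key).to_bytes(4, "little")).decode()
    print(f"{prefix}:{digest:02x}:{b64}:")
    PY
    }

    gen_dhchap_key() {  # gen_dhchap_key <digest> <len>
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 bytes
        file=$(mktemp -t "spdk.key-$digest.XXX")
        format_key DHHC-1 "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    gen_dhchap_key sha512 64   # prints a path like /tmp/spdk.key-sha512.W2e

The files this produces are what key0/ckey0 through key3 refer to once the keyring_file_add_key RPCs below load them on both the target (/var/tmp/spdk.sock) and the host (/var/tmp/host.sock) side.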
00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.136 15:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3055939 /var/tmp/host.sock 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3055939 ']' 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:31.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.393 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.NSG 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.NSG 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.NSG 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.W2e ]] 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2e 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2e 00:18:31.650 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.W2e 00:18:31.908 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:31.908 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xSs 00:18:31.908 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.908 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.908 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.908 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xSs 00:18:31.908 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xSs 00:18:32.167 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.fbV ]] 00:18:32.167 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fbV 00:18:32.167 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.167 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.167 15:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.167 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fbV 00:18:32.167 15:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fbV 00:18:32.167 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:32.167 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ezA 00:18:32.167 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.167 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ezA 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ezA 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.NcN ]] 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NcN 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NcN 00:18:32.425 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.NcN 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.GL9 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.GL9 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.GL9 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.683 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.942 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.200 00:18:33.200 15:23:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.200 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.200 15:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.459 { 00:18:33.459 "cntlid": 1, 00:18:33.459 "qid": 0, 00:18:33.459 "state": "enabled", 00:18:33.459 "thread": "nvmf_tgt_poll_group_000", 00:18:33.459 "listen_address": { 00:18:33.459 "trtype": "TCP", 00:18:33.459 "adrfam": "IPv4", 00:18:33.459 "traddr": "10.0.0.2", 00:18:33.459 "trsvcid": "4420" 00:18:33.459 }, 00:18:33.459 "peer_address": { 00:18:33.459 "trtype": "TCP", 00:18:33.459 "adrfam": "IPv4", 00:18:33.459 "traddr": "10.0.0.1", 00:18:33.459 "trsvcid": "38398" 00:18:33.459 }, 00:18:33.459 "auth": { 00:18:33.459 "state": "completed", 00:18:33.459 "digest": "sha256", 00:18:33.459 "dhgroup": "null" 00:18:33.459 } 00:18:33.459 } 00:18:33.459 ]' 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.459 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.718 15:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:18:34.285 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.285 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:34.285 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.285 15:23:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.285 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.285 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.285 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.285 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.544 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.544 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.803 { 00:18:34.803 "cntlid": 3, 00:18:34.803 "qid": 0, 00:18:34.803 
"state": "enabled", 00:18:34.803 "thread": "nvmf_tgt_poll_group_000", 00:18:34.803 "listen_address": { 00:18:34.803 "trtype": "TCP", 00:18:34.803 "adrfam": "IPv4", 00:18:34.803 "traddr": "10.0.0.2", 00:18:34.803 "trsvcid": "4420" 00:18:34.803 }, 00:18:34.803 "peer_address": { 00:18:34.803 "trtype": "TCP", 00:18:34.803 "adrfam": "IPv4", 00:18:34.803 "traddr": "10.0.0.1", 00:18:34.803 "trsvcid": "38424" 00:18:34.803 }, 00:18:34.803 "auth": { 00:18:34.803 "state": "completed", 00:18:34.803 "digest": "sha256", 00:18:34.803 "dhgroup": "null" 00:18:34.803 } 00:18:34.803 } 00:18:34.803 ]' 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.803 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.061 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:35.061 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.061 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.061 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.061 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.061 15:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.628 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.888 15:23:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.888 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.147 00:18:36.147 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.147 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.147 15:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.407 { 00:18:36.407 "cntlid": 5, 00:18:36.407 "qid": 0, 00:18:36.407 "state": "enabled", 00:18:36.407 "thread": "nvmf_tgt_poll_group_000", 00:18:36.407 "listen_address": { 00:18:36.407 "trtype": "TCP", 00:18:36.407 "adrfam": "IPv4", 00:18:36.407 "traddr": "10.0.0.2", 00:18:36.407 "trsvcid": "4420" 00:18:36.407 }, 00:18:36.407 "peer_address": { 00:18:36.407 "trtype": "TCP", 00:18:36.407 "adrfam": "IPv4", 00:18:36.407 "traddr": "10.0.0.1", 00:18:36.407 "trsvcid": "38454" 00:18:36.407 }, 00:18:36.407 "auth": { 00:18:36.407 "state": "completed", 00:18:36.407 "digest": "sha256", 00:18:36.407 "dhgroup": "null" 00:18:36.407 } 00:18:36.407 } 00:18:36.407 ]' 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.407 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.666 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:18:37.234 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.234 15:23:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:37.234 15:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.234 15:23:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.234 15:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.234 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.234 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:37.234 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.493 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.752 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.752 { 00:18:37.752 "cntlid": 7, 00:18:37.752 "qid": 0, 00:18:37.752 "state": "enabled", 00:18:37.752 "thread": "nvmf_tgt_poll_group_000", 00:18:37.752 "listen_address": { 00:18:37.752 "trtype": "TCP", 00:18:37.752 "adrfam": "IPv4", 00:18:37.752 "traddr": "10.0.0.2", 00:18:37.752 "trsvcid": "4420" 00:18:37.752 }, 00:18:37.752 "peer_address": { 00:18:37.752 "trtype": "TCP", 00:18:37.752 "adrfam": "IPv4", 00:18:37.752 "traddr": "10.0.0.1", 00:18:37.752 "trsvcid": "38470" 00:18:37.752 }, 00:18:37.752 "auth": { 00:18:37.752 "state": "completed", 00:18:37.752 "digest": "sha256", 00:18:37.752 "dhgroup": "null" 00:18:37.752 } 00:18:37.752 } 00:18:37.752 ]' 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.752 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.011 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.011 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:38.011 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.011 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.011 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.011 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.011 15:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.579 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.838 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.097 00:18:39.097 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.097 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.097 15:23:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.356 { 00:18:39.356 "cntlid": 9, 00:18:39.356 "qid": 0, 00:18:39.356 "state": "enabled", 00:18:39.356 "thread": "nvmf_tgt_poll_group_000", 00:18:39.356 "listen_address": { 00:18:39.356 "trtype": "TCP", 00:18:39.356 "adrfam": "IPv4", 00:18:39.356 "traddr": "10.0.0.2", 00:18:39.356 "trsvcid": "4420" 00:18:39.356 }, 00:18:39.356 "peer_address": { 00:18:39.356 "trtype": "TCP", 00:18:39.356 "adrfam": "IPv4", 00:18:39.356 "traddr": "10.0.0.1", 00:18:39.356 "trsvcid": "51564" 00:18:39.356 }, 00:18:39.356 "auth": { 00:18:39.356 "state": "completed", 00:18:39.356 "digest": "sha256", 00:18:39.356 "dhgroup": "ffdhe2048" 00:18:39.356 } 00:18:39.356 } 00:18:39.356 ]' 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.356 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.614 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:18:40.180 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.180 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:40.180 15:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.180 15:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.180 15:23:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.180 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.180 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.181 15:23:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.440 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.440 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.699 { 00:18:40.699 "cntlid": 11, 00:18:40.699 "qid": 0, 00:18:40.699 "state": "enabled", 00:18:40.699 "thread": "nvmf_tgt_poll_group_000", 00:18:40.699 "listen_address": { 00:18:40.699 "trtype": "TCP", 00:18:40.699 "adrfam": "IPv4", 00:18:40.699 "traddr": "10.0.0.2", 00:18:40.699 "trsvcid": "4420" 00:18:40.699 }, 00:18:40.699 "peer_address": { 00:18:40.699 "trtype": "TCP", 00:18:40.699 "adrfam": "IPv4", 00:18:40.699 "traddr": "10.0.0.1", 00:18:40.699 "trsvcid": "51612" 00:18:40.699 }, 00:18:40.699 "auth": { 00:18:40.699 "state": "completed", 00:18:40.699 "digest": "sha256", 00:18:40.699 "dhgroup": "ffdhe2048" 00:18:40.699 } 00:18:40.699 } 00:18:40.699 ]' 00:18:40.699 
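From here to the end of the section the trace repeats one shape per (digest, dhgroup, keyid) combination. Condensed into a standalone sketch of a single round (a hedged reconstruction of connect_authenticate in target/auth.sh; $rpc, $subnqn and $hostnqn are shorthand for the full rpc.py path and the uuid-based NQN used in this run):

    # One connect_authenticate round, as traced above (sha256/ffdhe2048/key1).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # target side uses the default socket

    # 1. Restrict the initiator to one digest/dhgroup combination.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # 2. Register the host on the subsystem with the key pair under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # 3. Attach a controller; this is where DH-HMAC-CHAP actually runs.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # 4. Assert the qpair negotiated what was configured.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'   # completed / sha256 / ffdhe2048
    # 5. Tear down before the next combination.
    hostrpc bdev_nvme_detach_controller nvme0

The jq reads mirror the [[ sha256 == \s\h\a\2\5\6 ]] style assertions in the trace, and the nvme connect / nvme disconnect step after each round replays the same handshake through the kernel initiator, passing the keys inline in their DHHC-1 form.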
15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.699 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.958 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.958 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.958 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.958 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.958 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.958 15:23:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:41.526 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.786 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.045 00:18:42.045 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.045 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.045 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.304 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.304 15:23:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.304 15:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.304 15:23:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.304 15:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.304 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.304 { 00:18:42.304 "cntlid": 13, 00:18:42.304 "qid": 0, 00:18:42.304 "state": "enabled", 00:18:42.304 "thread": "nvmf_tgt_poll_group_000", 00:18:42.304 "listen_address": { 00:18:42.304 "trtype": "TCP", 00:18:42.304 "adrfam": "IPv4", 00:18:42.304 "traddr": "10.0.0.2", 00:18:42.304 "trsvcid": "4420" 00:18:42.304 }, 00:18:42.304 "peer_address": { 00:18:42.304 "trtype": "TCP", 00:18:42.304 "adrfam": "IPv4", 00:18:42.304 "traddr": "10.0.0.1", 00:18:42.304 "trsvcid": "51648" 00:18:42.304 }, 00:18:42.304 "auth": { 00:18:42.304 "state": "completed", 00:18:42.304 "digest": "sha256", 00:18:42.304 "dhgroup": "ffdhe2048" 00:18:42.304 } 00:18:42.304 } 00:18:42.304 ]' 00:18:42.304 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.304 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.304 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.304 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.304 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.305 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.305 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.305 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.563 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:18:43.132 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.132 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:43.132 15:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.132 15:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.132 15:23:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.132 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.132 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:43.133 15:23:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.392 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.392 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.651 { 00:18:43.651 "cntlid": 15, 00:18:43.651 "qid": 0, 00:18:43.651 "state": "enabled", 00:18:43.651 "thread": "nvmf_tgt_poll_group_000", 00:18:43.651 "listen_address": { 00:18:43.651 "trtype": "TCP", 00:18:43.651 "adrfam": "IPv4", 00:18:43.651 "traddr": "10.0.0.2", 00:18:43.651 "trsvcid": "4420" 00:18:43.651 }, 00:18:43.651 "peer_address": { 00:18:43.651 "trtype": "TCP", 00:18:43.651 "adrfam": "IPv4", 00:18:43.651 "traddr": "10.0.0.1", 00:18:43.651 "trsvcid": "51674" 00:18:43.651 }, 00:18:43.651 "auth": { 00:18:43.651 "state": "completed", 00:18:43.651 "digest": "sha256", 00:18:43.651 "dhgroup": "ffdhe2048" 00:18:43.651 } 00:18:43.651 } 00:18:43.651 ]' 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.651 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.910 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.910 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.910 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.910 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.910 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.910 15:23:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:44.478 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.770 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.059 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.059 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.059 { 00:18:45.059 "cntlid": 17, 00:18:45.059 "qid": 0, 00:18:45.059 "state": "enabled", 00:18:45.059 "thread": "nvmf_tgt_poll_group_000", 00:18:45.059 "listen_address": { 00:18:45.059 "trtype": "TCP", 00:18:45.059 "adrfam": "IPv4", 00:18:45.059 "traddr": 
"10.0.0.2", 00:18:45.059 "trsvcid": "4420" 00:18:45.059 }, 00:18:45.059 "peer_address": { 00:18:45.059 "trtype": "TCP", 00:18:45.059 "adrfam": "IPv4", 00:18:45.059 "traddr": "10.0.0.1", 00:18:45.059 "trsvcid": "51696" 00:18:45.059 }, 00:18:45.059 "auth": { 00:18:45.059 "state": "completed", 00:18:45.059 "digest": "sha256", 00:18:45.059 "dhgroup": "ffdhe3072" 00:18:45.059 } 00:18:45.060 } 00:18:45.060 ]' 00:18:45.060 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.318 15:23:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.318 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.318 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.318 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.318 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.318 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.318 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.577 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:46.145 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.145 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.404 00:18:46.404 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.404 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.404 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.663 { 00:18:46.663 "cntlid": 19, 00:18:46.663 "qid": 0, 00:18:46.663 "state": "enabled", 00:18:46.663 "thread": "nvmf_tgt_poll_group_000", 00:18:46.663 "listen_address": { 00:18:46.663 "trtype": "TCP", 00:18:46.663 "adrfam": "IPv4", 00:18:46.663 "traddr": "10.0.0.2", 00:18:46.663 "trsvcid": "4420" 00:18:46.663 }, 00:18:46.663 "peer_address": { 00:18:46.663 "trtype": "TCP", 00:18:46.663 "adrfam": "IPv4", 00:18:46.663 "traddr": "10.0.0.1", 00:18:46.663 "trsvcid": "51734" 00:18:46.663 }, 00:18:46.663 "auth": { 00:18:46.663 "state": "completed", 00:18:46.663 "digest": "sha256", 00:18:46.663 "dhgroup": "ffdhe3072" 00:18:46.663 } 00:18:46.663 } 00:18:46.663 ]' 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.663 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.922 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.922 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.922 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.922 15:23:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.502 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.761 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.019 00:18:48.019 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.019 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.019 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.278 { 00:18:48.278 "cntlid": 21, 00:18:48.278 "qid": 0, 00:18:48.278 "state": "enabled", 00:18:48.278 "thread": "nvmf_tgt_poll_group_000", 00:18:48.278 "listen_address": { 00:18:48.278 "trtype": "TCP", 00:18:48.278 "adrfam": "IPv4", 00:18:48.278 "traddr": "10.0.0.2", 00:18:48.278 "trsvcid": "4420" 00:18:48.278 }, 00:18:48.278 "peer_address": { 00:18:48.278 "trtype": "TCP", 00:18:48.278 "adrfam": "IPv4", 00:18:48.278 "traddr": "10.0.0.1", 00:18:48.278 "trsvcid": "47924" 00:18:48.278 }, 00:18:48.278 "auth": { 00:18:48.278 "state": "completed", 00:18:48.278 "digest": "sha256", 00:18:48.278 "dhgroup": "ffdhe3072" 00:18:48.278 } 00:18:48.278 } 00:18:48.278 ]' 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.278 15:23:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.278 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.278 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.278 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.278 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.278 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.536 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.104 15:23:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.362 00:18:49.362 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.362 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.362 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.621 { 00:18:49.621 "cntlid": 23, 00:18:49.621 "qid": 0, 00:18:49.621 "state": "enabled", 00:18:49.621 "thread": "nvmf_tgt_poll_group_000", 00:18:49.621 "listen_address": { 00:18:49.621 "trtype": "TCP", 00:18:49.621 "adrfam": "IPv4", 00:18:49.621 "traddr": "10.0.0.2", 00:18:49.621 "trsvcid": "4420" 00:18:49.621 }, 00:18:49.621 "peer_address": { 00:18:49.621 "trtype": "TCP", 00:18:49.621 "adrfam": "IPv4", 00:18:49.621 "traddr": "10.0.0.1", 00:18:49.621 "trsvcid": "47950" 00:18:49.621 }, 00:18:49.621 "auth": { 00:18:49.621 "state": "completed", 00:18:49.621 "digest": "sha256", 00:18:49.621 "dhgroup": "ffdhe3072" 00:18:49.621 } 00:18:49.621 } 00:18:49.621 ]' 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.621 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.879 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.879 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.879 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.879 15:23:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.446 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.703 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.962 00:18:50.962 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.962 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.962 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.221 { 00:18:51.221 "cntlid": 25, 00:18:51.221 "qid": 0, 00:18:51.221 "state": "enabled", 00:18:51.221 "thread": "nvmf_tgt_poll_group_000", 00:18:51.221 "listen_address": { 00:18:51.221 "trtype": "TCP", 00:18:51.221 "adrfam": "IPv4", 00:18:51.221 "traddr": "10.0.0.2", 00:18:51.221 "trsvcid": "4420" 00:18:51.221 }, 00:18:51.221 "peer_address": { 00:18:51.221 "trtype": "TCP", 00:18:51.221 "adrfam": "IPv4", 00:18:51.221 "traddr": "10.0.0.1", 00:18:51.221 "trsvcid": "47984" 00:18:51.221 }, 00:18:51.221 "auth": { 00:18:51.221 "state": "completed", 00:18:51.221 "digest": "sha256", 00:18:51.221 "dhgroup": "ffdhe4096" 00:18:51.221 } 00:18:51.221 } 00:18:51.221 ]' 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.221 15:23:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.221 15:23:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.221 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.221 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.221 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.479 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.047 15:23:55 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.047 15:23:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.305 00:18:52.305 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.305 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.305 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.563 { 00:18:52.563 "cntlid": 27, 00:18:52.563 "qid": 0, 00:18:52.563 "state": "enabled", 00:18:52.563 "thread": "nvmf_tgt_poll_group_000", 00:18:52.563 "listen_address": { 00:18:52.563 "trtype": "TCP", 00:18:52.563 "adrfam": "IPv4", 00:18:52.563 "traddr": "10.0.0.2", 00:18:52.563 "trsvcid": "4420" 00:18:52.563 }, 00:18:52.563 "peer_address": { 00:18:52.563 "trtype": "TCP", 00:18:52.563 "adrfam": "IPv4", 00:18:52.563 "traddr": "10.0.0.1", 00:18:52.563 "trsvcid": "48018" 00:18:52.563 }, 00:18:52.563 "auth": { 00:18:52.563 "state": "completed", 00:18:52.563 "digest": "sha256", 00:18:52.563 "dhgroup": "ffdhe4096" 00:18:52.563 } 00:18:52.563 } 00:18:52.563 ]' 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.563 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.822 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.822 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.822 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.822 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.390 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.650 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.909 00:18:53.909 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.909 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.909 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.168 { 00:18:54.168 "cntlid": 29, 00:18:54.168 "qid": 0, 00:18:54.168 "state": "enabled", 00:18:54.168 "thread": "nvmf_tgt_poll_group_000", 00:18:54.168 "listen_address": { 00:18:54.168 "trtype": "TCP", 00:18:54.168 "adrfam": "IPv4", 00:18:54.168 "traddr": "10.0.0.2", 00:18:54.168 "trsvcid": "4420" 00:18:54.168 }, 00:18:54.168 "peer_address": { 00:18:54.168 "trtype": "TCP", 00:18:54.168 "adrfam": "IPv4", 00:18:54.168 "traddr": "10.0.0.1", 00:18:54.168 "trsvcid": "48054" 00:18:54.168 }, 00:18:54.168 "auth": { 00:18:54.168 "state": "completed", 00:18:54.168 "digest": "sha256", 00:18:54.168 "dhgroup": "ffdhe4096" 00:18:54.168 } 00:18:54.168 } 00:18:54.168 ]' 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.168 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.427 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.993 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.251 00:18:55.251 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.251 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.251 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.509 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.509 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.509 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.509 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.509 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.509 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.509 { 00:18:55.509 "cntlid": 31, 00:18:55.509 "qid": 0, 00:18:55.509 "state": "enabled", 00:18:55.509 "thread": "nvmf_tgt_poll_group_000", 00:18:55.509 "listen_address": { 00:18:55.510 "trtype": "TCP", 00:18:55.510 "adrfam": "IPv4", 00:18:55.510 "traddr": "10.0.0.2", 00:18:55.510 "trsvcid": 
"4420" 00:18:55.510 }, 00:18:55.510 "peer_address": { 00:18:55.510 "trtype": "TCP", 00:18:55.510 "adrfam": "IPv4", 00:18:55.510 "traddr": "10.0.0.1", 00:18:55.510 "trsvcid": "48078" 00:18:55.510 }, 00:18:55.510 "auth": { 00:18:55.510 "state": "completed", 00:18:55.510 "digest": "sha256", 00:18:55.510 "dhgroup": "ffdhe4096" 00:18:55.510 } 00:18:55.510 } 00:18:55.510 ]' 00:18:55.510 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.510 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.510 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.510 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:55.510 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.768 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.768 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.768 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.768 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.335 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.594 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.853 00:18:56.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.111 { 00:18:57.111 "cntlid": 33, 00:18:57.111 "qid": 0, 00:18:57.111 "state": "enabled", 00:18:57.111 "thread": "nvmf_tgt_poll_group_000", 00:18:57.111 "listen_address": { 00:18:57.111 "trtype": "TCP", 00:18:57.111 "adrfam": "IPv4", 00:18:57.111 "traddr": "10.0.0.2", 00:18:57.111 "trsvcid": "4420" 00:18:57.111 }, 00:18:57.111 "peer_address": { 00:18:57.111 "trtype": "TCP", 00:18:57.111 "adrfam": "IPv4", 00:18:57.111 "traddr": "10.0.0.1", 00:18:57.111 "trsvcid": "48106" 00:18:57.111 }, 00:18:57.111 "auth": { 00:18:57.111 "state": "completed", 00:18:57.111 "digest": "sha256", 00:18:57.111 "dhgroup": "ffdhe6144" 00:18:57.111 } 00:18:57.111 } 00:18:57.111 ]' 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.111 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.370 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:57.370 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.370 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.370 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:18:57.937 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.196 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:58.196 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.196 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.196 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.196 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.196 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.196 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.196 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.454 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.714 { 00:18:58.714 "cntlid": 35, 00:18:58.714 "qid": 0, 00:18:58.714 "state": "enabled", 00:18:58.714 "thread": "nvmf_tgt_poll_group_000", 00:18:58.714 "listen_address": { 00:18:58.714 "trtype": "TCP", 00:18:58.714 "adrfam": "IPv4", 00:18:58.714 "traddr": "10.0.0.2", 00:18:58.714 "trsvcid": "4420" 00:18:58.714 }, 00:18:58.714 "peer_address": { 00:18:58.714 "trtype": "TCP", 00:18:58.714 "adrfam": "IPv4", 00:18:58.714 "traddr": "10.0.0.1", 00:18:58.714 "trsvcid": "44438" 00:18:58.714 }, 00:18:58.714 "auth": { 00:18:58.714 "state": "completed", 00:18:58.714 "digest": "sha256", 00:18:58.714 "dhgroup": "ffdhe6144" 00:18:58.714 } 00:18:58.714 } 00:18:58.714 ]' 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.714 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.973 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.973 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.973 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.973 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.973 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.973 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
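For orientation: each pass of the loop above (here sha256/ffdhe6144 with key1) boils down to the sequence below, assembled from the exact commands visible in this log. The rpc.py path, the /var/tmp/host.sock host socket, and the key names key1/ckey1 are this run's values; the DHHC-1 keys themselves are registered earlier in target/auth.sh, outside this excerpt.

# one connect_authenticate pass, as exercised by target/auth.sh in this run
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
SUBNQN=nqn.2024-03.io.spdk:cnode0
# pin the host-side initiator to one digest/dhgroup combination
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
# authorize the host on the target; the ctrlr key enables bidirectional auth
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# attach from the host side, authenticating with the same key pair
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# the target's qpair listing is the proof that DH-HMAC-CHAP completed
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'
# tear down before the next key/dhgroup pass
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each pass then repeats the same handshake from the kernel initiator with nvme connect --dhchap-secret DHHC-1:... (and --dhchap-ctrl-secret for the bidirectional cases), disconnects, and removes the host, exactly as the surrounding log shows.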
00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.540 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.798 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.799 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.057 00:19:00.057 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.057 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.057 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
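The nvmf_subsystem_get_qpairs dump that follows is what the test actually asserts on: each qpair entry carries the listen/peer addresses plus an "auth" object recording the negotiated digest, dhgroup, and state. The three jq probes repeated throughout this log reduce to the checks sketched below (RPC and SUBNQN as in the sketch above; the scratch file is an illustration device, not part of auth.sh, and the expected values are this pass's, sha256/ffdhe6144).

# the three per-pass assertions connect_authenticate makes on the qpair list
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" > /tmp/qpairs.json   # scratch file, assumed
jq -r '.[0].auth.digest'  /tmp/qpairs.json   # expect: sha256
jq -r '.[0].auth.dhgroup' /tmp/qpairs.json   # expect: ffdhe6144
jq -r '.[0].auth.state'   /tmp/qpairs.json   # expect: completed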
00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.316 { 00:19:00.316 "cntlid": 37, 00:19:00.316 "qid": 0, 00:19:00.316 "state": "enabled", 00:19:00.316 "thread": "nvmf_tgt_poll_group_000", 00:19:00.316 "listen_address": { 00:19:00.316 "trtype": "TCP", 00:19:00.316 "adrfam": "IPv4", 00:19:00.316 "traddr": "10.0.0.2", 00:19:00.316 "trsvcid": "4420" 00:19:00.316 }, 00:19:00.316 "peer_address": { 00:19:00.316 "trtype": "TCP", 00:19:00.316 "adrfam": "IPv4", 00:19:00.316 "traddr": "10.0.0.1", 00:19:00.316 "trsvcid": "44454" 00:19:00.316 }, 00:19:00.316 "auth": { 00:19:00.316 "state": "completed", 00:19:00.316 "digest": "sha256", 00:19:00.316 "dhgroup": "ffdhe6144" 00:19:00.316 } 00:19:00.316 } 00:19:00.316 ]' 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.316 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.576 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.576 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.576 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.576 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.576 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.576 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.143 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.415 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.724 00:19:01.724 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.724 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.724 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.984 { 00:19:01.984 "cntlid": 39, 00:19:01.984 "qid": 0, 00:19:01.984 "state": "enabled", 00:19:01.984 "thread": "nvmf_tgt_poll_group_000", 00:19:01.984 "listen_address": { 00:19:01.984 "trtype": "TCP", 00:19:01.984 "adrfam": "IPv4", 00:19:01.984 "traddr": "10.0.0.2", 00:19:01.984 "trsvcid": "4420" 00:19:01.984 }, 00:19:01.984 "peer_address": { 00:19:01.984 "trtype": "TCP", 00:19:01.984 "adrfam": "IPv4", 00:19:01.984 "traddr": "10.0.0.1", 00:19:01.984 "trsvcid": "44490" 00:19:01.984 }, 00:19:01.984 "auth": { 00:19:01.984 "state": "completed", 00:19:01.984 "digest": "sha256", 00:19:01.984 "dhgroup": "ffdhe6144" 00:19:01.984 } 00:19:01.984 } 00:19:01.984 ]' 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.984 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.243 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.810 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.377 00:19:03.377 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.377 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.377 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.648 { 00:19:03.648 "cntlid": 41, 00:19:03.648 "qid": 0, 00:19:03.648 "state": "enabled", 00:19:03.648 "thread": "nvmf_tgt_poll_group_000", 00:19:03.648 "listen_address": { 00:19:03.648 "trtype": "TCP", 00:19:03.648 "adrfam": "IPv4", 00:19:03.648 "traddr": "10.0.0.2", 00:19:03.648 "trsvcid": "4420" 00:19:03.648 }, 00:19:03.648 "peer_address": { 00:19:03.648 "trtype": "TCP", 00:19:03.648 "adrfam": "IPv4", 00:19:03.648 "traddr": "10.0.0.1", 00:19:03.648 "trsvcid": "44514" 00:19:03.648 }, 00:19:03.648 "auth": { 00:19:03.648 "state": "completed", 00:19:03.648 "digest": "sha256", 00:19:03.648 "dhgroup": "ffdhe8192" 00:19:03.648 } 00:19:03.648 } 00:19:03.648 ]' 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.648 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.649 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.907 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.475 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.476 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.476 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.476 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.476 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.476 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.476 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.476 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.044 00:19:05.044 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.044 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.044 15:24:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.304 { 00:19:05.304 "cntlid": 43, 00:19:05.304 "qid": 0, 00:19:05.304 "state": "enabled", 00:19:05.304 "thread": "nvmf_tgt_poll_group_000", 00:19:05.304 "listen_address": { 00:19:05.304 "trtype": "TCP", 00:19:05.304 "adrfam": "IPv4", 00:19:05.304 "traddr": "10.0.0.2", 00:19:05.304 "trsvcid": "4420" 00:19:05.304 }, 00:19:05.304 "peer_address": { 00:19:05.304 "trtype": "TCP", 00:19:05.304 "adrfam": "IPv4", 00:19:05.304 "traddr": "10.0.0.1", 00:19:05.304 "trsvcid": "44540" 00:19:05.304 }, 00:19:05.304 "auth": { 00:19:05.304 "state": "completed", 00:19:05.304 "digest": "sha256", 00:19:05.304 "dhgroup": "ffdhe8192" 00:19:05.304 } 00:19:05.304 } 00:19:05.304 ]' 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.304 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.563 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.130 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.387 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.644 00:19:06.644 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.644 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.644 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.902 { 00:19:06.902 "cntlid": 45, 00:19:06.902 "qid": 0, 00:19:06.902 "state": "enabled", 00:19:06.902 "thread": "nvmf_tgt_poll_group_000", 00:19:06.902 "listen_address": { 00:19:06.902 "trtype": "TCP", 00:19:06.902 "adrfam": "IPv4", 00:19:06.902 "traddr": "10.0.0.2", 00:19:06.902 
"trsvcid": "4420" 00:19:06.902 }, 00:19:06.902 "peer_address": { 00:19:06.902 "trtype": "TCP", 00:19:06.902 "adrfam": "IPv4", 00:19:06.902 "traddr": "10.0.0.1", 00:19:06.902 "trsvcid": "44572" 00:19:06.902 }, 00:19:06.902 "auth": { 00:19:06.902 "state": "completed", 00:19:06.902 "digest": "sha256", 00:19:06.902 "dhgroup": "ffdhe8192" 00:19:06.902 } 00:19:06.902 } 00:19:06.902 ]' 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.902 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.159 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.159 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.159 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.159 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.159 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.726 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.984 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.549 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.549 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.549 { 00:19:08.549 "cntlid": 47, 00:19:08.549 "qid": 0, 00:19:08.549 "state": "enabled", 00:19:08.549 "thread": "nvmf_tgt_poll_group_000", 00:19:08.549 "listen_address": { 00:19:08.549 "trtype": "TCP", 00:19:08.549 "adrfam": "IPv4", 00:19:08.549 "traddr": "10.0.0.2", 00:19:08.549 "trsvcid": "4420" 00:19:08.549 }, 00:19:08.549 "peer_address": { 00:19:08.549 "trtype": "TCP", 00:19:08.549 "adrfam": "IPv4", 00:19:08.549 "traddr": "10.0.0.1", 00:19:08.549 "trsvcid": "36290" 00:19:08.549 }, 00:19:08.549 "auth": { 00:19:08.549 "state": "completed", 00:19:08.549 "digest": "sha256", 00:19:08.549 "dhgroup": "ffdhe8192" 00:19:08.549 } 00:19:08.549 } 00:19:08.549 ]' 00:19:08.550 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.550 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.550 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.807 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.807 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.807 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.807 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:19:08.807 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.807 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.370 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:09.371 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.627 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.884 00:19:09.884 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.884 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.884 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.142 { 00:19:10.142 "cntlid": 49, 00:19:10.142 "qid": 0, 00:19:10.142 "state": "enabled", 00:19:10.142 "thread": "nvmf_tgt_poll_group_000", 00:19:10.142 "listen_address": { 00:19:10.142 "trtype": "TCP", 00:19:10.142 "adrfam": "IPv4", 00:19:10.142 "traddr": "10.0.0.2", 00:19:10.142 "trsvcid": "4420" 00:19:10.142 }, 00:19:10.142 "peer_address": { 00:19:10.142 "trtype": "TCP", 00:19:10.142 "adrfam": "IPv4", 00:19:10.142 "traddr": "10.0.0.1", 00:19:10.142 "trsvcid": "36334" 00:19:10.142 }, 00:19:10.142 "auth": { 00:19:10.142 "state": "completed", 00:19:10.142 "digest": "sha384", 00:19:10.142 "dhgroup": "null" 00:19:10.142 } 00:19:10.142 } 00:19:10.142 ]' 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.142 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.399 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:10.964 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.964 15:24:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:10.964 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.964 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.964 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.964 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.964 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:10.964 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.221 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.221 00:19:11.221 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.221 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.221 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.479 { 00:19:11.479 "cntlid": 51, 00:19:11.479 "qid": 0, 00:19:11.479 "state": "enabled", 00:19:11.479 "thread": "nvmf_tgt_poll_group_000", 00:19:11.479 "listen_address": { 00:19:11.479 "trtype": "TCP", 00:19:11.479 "adrfam": "IPv4", 00:19:11.479 "traddr": "10.0.0.2", 00:19:11.479 "trsvcid": "4420" 00:19:11.479 }, 00:19:11.479 "peer_address": { 00:19:11.479 "trtype": "TCP", 00:19:11.479 "adrfam": "IPv4", 00:19:11.479 "traddr": "10.0.0.1", 00:19:11.479 "trsvcid": "36370" 00:19:11.479 }, 00:19:11.479 "auth": { 00:19:11.479 "state": "completed", 00:19:11.479 "digest": "sha384", 00:19:11.479 "dhgroup": "null" 00:19:11.479 } 00:19:11.479 } 00:19:11.479 ]' 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.479 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.736 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.736 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.736 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.736 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:12.302 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:12.560 
15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.560 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.817 00:19:12.817 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.817 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.817 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.075 { 00:19:13.075 "cntlid": 53, 00:19:13.075 "qid": 0, 00:19:13.075 "state": "enabled", 00:19:13.075 "thread": "nvmf_tgt_poll_group_000", 00:19:13.075 "listen_address": { 00:19:13.075 "trtype": "TCP", 00:19:13.075 "adrfam": "IPv4", 00:19:13.075 "traddr": "10.0.0.2", 00:19:13.075 "trsvcid": "4420" 00:19:13.075 }, 00:19:13.075 "peer_address": { 00:19:13.075 "trtype": "TCP", 00:19:13.075 "adrfam": "IPv4", 00:19:13.075 "traddr": "10.0.0.1", 00:19:13.075 "trsvcid": "36386" 00:19:13.075 }, 00:19:13.075 "auth": { 00:19:13.075 "state": "completed", 00:19:13.075 "digest": "sha384", 00:19:13.075 "dhgroup": "null" 00:19:13.075 } 00:19:13.075 } 00:19:13.075 ]' 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.075 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.333 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:13.897 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.897 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:13.897 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.898 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.155 00:19:14.155 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.155 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.155 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.413 { 00:19:14.413 "cntlid": 55, 00:19:14.413 "qid": 0, 00:19:14.413 "state": "enabled", 00:19:14.413 "thread": "nvmf_tgt_poll_group_000", 00:19:14.413 "listen_address": { 00:19:14.413 "trtype": "TCP", 00:19:14.413 "adrfam": "IPv4", 00:19:14.413 "traddr": "10.0.0.2", 00:19:14.413 "trsvcid": "4420" 00:19:14.413 }, 00:19:14.413 "peer_address": { 00:19:14.413 "trtype": "TCP", 00:19:14.413 "adrfam": "IPv4", 00:19:14.413 "traddr": "10.0.0.1", 00:19:14.413 "trsvcid": "36398" 00:19:14.413 }, 00:19:14.413 "auth": { 00:19:14.413 "state": "completed", 00:19:14.413 "digest": "sha384", 00:19:14.413 "dhgroup": "null" 00:19:14.413 } 00:19:14.413 } 00:19:14.413 ]' 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.413 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.671 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.671 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.671 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.671 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:15.235 15:24:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.235 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:15.235 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.235 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.235 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.235 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.236 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.236 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.236 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.492 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.750 00:19:15.750 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.750 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.750 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.009 { 00:19:16.009 "cntlid": 57, 00:19:16.009 "qid": 0, 00:19:16.009 "state": "enabled", 00:19:16.009 "thread": "nvmf_tgt_poll_group_000", 00:19:16.009 "listen_address": { 00:19:16.009 "trtype": "TCP", 00:19:16.009 "adrfam": "IPv4", 00:19:16.009 "traddr": "10.0.0.2", 00:19:16.009 "trsvcid": "4420" 00:19:16.009 }, 00:19:16.009 "peer_address": { 00:19:16.009 "trtype": "TCP", 00:19:16.009 "adrfam": "IPv4", 00:19:16.009 "traddr": "10.0.0.1", 00:19:16.009 "trsvcid": "36428" 00:19:16.009 }, 00:19:16.009 "auth": { 00:19:16.009 "state": "completed", 00:19:16.009 "digest": "sha384", 00:19:16.009 "dhgroup": "ffdhe2048" 00:19:16.009 } 00:19:16.009 } 00:19:16.009 ]' 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.009 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.267 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.833 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.091 00:19:17.091 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.091 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.091 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.349 { 00:19:17.349 "cntlid": 59, 00:19:17.349 "qid": 0, 00:19:17.349 "state": "enabled", 00:19:17.349 "thread": "nvmf_tgt_poll_group_000", 00:19:17.349 "listen_address": { 00:19:17.349 "trtype": "TCP", 00:19:17.349 "adrfam": "IPv4", 00:19:17.349 "traddr": "10.0.0.2", 00:19:17.349 "trsvcid": "4420" 00:19:17.349 }, 00:19:17.349 "peer_address": { 00:19:17.349 "trtype": "TCP", 00:19:17.349 "adrfam": "IPv4", 00:19:17.349 
"traddr": "10.0.0.1", 00:19:17.349 "trsvcid": "36462" 00:19:17.349 }, 00:19:17.349 "auth": { 00:19:17.349 "state": "completed", 00:19:17.349 "digest": "sha384", 00:19:17.349 "dhgroup": "ffdhe2048" 00:19:17.349 } 00:19:17.349 } 00:19:17.349 ]' 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.349 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.608 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.189 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.495 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.495 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.754 { 00:19:18.754 "cntlid": 61, 00:19:18.754 "qid": 0, 00:19:18.754 "state": "enabled", 00:19:18.754 "thread": "nvmf_tgt_poll_group_000", 00:19:18.754 "listen_address": { 00:19:18.754 "trtype": "TCP", 00:19:18.754 "adrfam": "IPv4", 00:19:18.754 "traddr": "10.0.0.2", 00:19:18.754 "trsvcid": "4420" 00:19:18.754 }, 00:19:18.754 "peer_address": { 00:19:18.754 "trtype": "TCP", 00:19:18.754 "adrfam": "IPv4", 00:19:18.754 "traddr": "10.0.0.1", 00:19:18.754 "trsvcid": "55870" 00:19:18.754 }, 00:19:18.754 "auth": { 00:19:18.754 "state": "completed", 00:19:18.754 "digest": "sha384", 00:19:18.754 "dhgroup": "ffdhe2048" 00:19:18.754 } 00:19:18.754 } 00:19:18.754 ]' 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.754 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.014 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.014 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.014 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.014 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.014 15:24:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.014 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:19.581 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.839 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.840 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.098 00:19:20.098 15:24:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.098 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.098 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.357 { 00:19:20.357 "cntlid": 63, 00:19:20.357 "qid": 0, 00:19:20.357 "state": "enabled", 00:19:20.357 "thread": "nvmf_tgt_poll_group_000", 00:19:20.357 "listen_address": { 00:19:20.357 "trtype": "TCP", 00:19:20.357 "adrfam": "IPv4", 00:19:20.357 "traddr": "10.0.0.2", 00:19:20.357 "trsvcid": "4420" 00:19:20.357 }, 00:19:20.357 "peer_address": { 00:19:20.357 "trtype": "TCP", 00:19:20.357 "adrfam": "IPv4", 00:19:20.357 "traddr": "10.0.0.1", 00:19:20.357 "trsvcid": "55896" 00:19:20.357 }, 00:19:20.357 "auth": { 00:19:20.357 "state": "completed", 00:19:20.357 "digest": "sha384", 00:19:20.357 "dhgroup": "ffdhe2048" 00:19:20.357 } 00:19:20.357 } 00:19:20.357 ]' 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.357 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.616 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.197 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.197 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.198 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.198 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.198 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.457 00:19:21.457 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.457 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.457 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.716 { 
00:19:21.716 "cntlid": 65, 00:19:21.716 "qid": 0, 00:19:21.716 "state": "enabled", 00:19:21.716 "thread": "nvmf_tgt_poll_group_000", 00:19:21.716 "listen_address": { 00:19:21.716 "trtype": "TCP", 00:19:21.716 "adrfam": "IPv4", 00:19:21.716 "traddr": "10.0.0.2", 00:19:21.716 "trsvcid": "4420" 00:19:21.716 }, 00:19:21.716 "peer_address": { 00:19:21.716 "trtype": "TCP", 00:19:21.716 "adrfam": "IPv4", 00:19:21.716 "traddr": "10.0.0.1", 00:19:21.716 "trsvcid": "55922" 00:19:21.716 }, 00:19:21.716 "auth": { 00:19:21.716 "state": "completed", 00:19:21.716 "digest": "sha384", 00:19:21.716 "dhgroup": "ffdhe3072" 00:19:21.716 } 00:19:21.716 } 00:19:21.716 ]' 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.716 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.975 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.975 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.975 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.975 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.543 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.802 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.061 00:19:23.061 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.061 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.061 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.319 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.319 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.319 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.319 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.319 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.319 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.319 { 00:19:23.319 "cntlid": 67, 00:19:23.319 "qid": 0, 00:19:23.319 "state": "enabled", 00:19:23.319 "thread": "nvmf_tgt_poll_group_000", 00:19:23.319 "listen_address": { 00:19:23.319 "trtype": "TCP", 00:19:23.319 "adrfam": "IPv4", 00:19:23.319 "traddr": "10.0.0.2", 00:19:23.319 "trsvcid": "4420" 00:19:23.319 }, 00:19:23.319 "peer_address": { 00:19:23.319 "trtype": "TCP", 00:19:23.319 "adrfam": "IPv4", 00:19:23.319 "traddr": "10.0.0.1", 00:19:23.319 "trsvcid": "55936" 00:19:23.319 }, 00:19:23.319 "auth": { 00:19:23.319 "state": "completed", 00:19:23.319 "digest": "sha384", 00:19:23.319 "dhgroup": "ffdhe3072" 00:19:23.319 } 00:19:23.319 } 00:19:23.319 ]' 00:19:23.319 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.319 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.320 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.320 15:24:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.320 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.320 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.320 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.320 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.578 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.145 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.145 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.403 00:19:24.403 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.403 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.403 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.662 { 00:19:24.662 "cntlid": 69, 00:19:24.662 "qid": 0, 00:19:24.662 "state": "enabled", 00:19:24.662 "thread": "nvmf_tgt_poll_group_000", 00:19:24.662 "listen_address": { 00:19:24.662 "trtype": "TCP", 00:19:24.662 "adrfam": "IPv4", 00:19:24.662 "traddr": "10.0.0.2", 00:19:24.662 "trsvcid": "4420" 00:19:24.662 }, 00:19:24.662 "peer_address": { 00:19:24.662 "trtype": "TCP", 00:19:24.662 "adrfam": "IPv4", 00:19:24.662 "traddr": "10.0.0.1", 00:19:24.662 "trsvcid": "55962" 00:19:24.662 }, 00:19:24.662 "auth": { 00:19:24.662 "state": "completed", 00:19:24.662 "digest": "sha384", 00:19:24.662 "dhgroup": "ffdhe3072" 00:19:24.662 } 00:19:24.662 } 00:19:24.662 ]' 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:24.662 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.921 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.921 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.921 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.921 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret 
DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.488 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:25.748 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.007 00:19:26.007 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.007 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.007 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.266 { 00:19:26.266 "cntlid": 71, 00:19:26.266 "qid": 0, 00:19:26.266 "state": "enabled", 00:19:26.266 "thread": "nvmf_tgt_poll_group_000", 00:19:26.266 "listen_address": { 00:19:26.266 "trtype": "TCP", 00:19:26.266 "adrfam": "IPv4", 00:19:26.266 "traddr": "10.0.0.2", 00:19:26.266 "trsvcid": "4420" 00:19:26.266 }, 00:19:26.266 "peer_address": { 00:19:26.266 "trtype": "TCP", 00:19:26.266 "adrfam": "IPv4", 00:19:26.266 "traddr": "10.0.0.1", 00:19:26.266 "trsvcid": "55986" 00:19:26.266 }, 00:19:26.266 "auth": { 00:19:26.266 "state": "completed", 00:19:26.266 "digest": "sha384", 00:19:26.266 "dhgroup": "ffdhe3072" 00:19:26.266 } 00:19:26.266 } 00:19:26.266 ]' 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:26.266 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.266 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.266 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.266 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.266 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.266 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.524 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:27.092 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.093 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.351 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.610 { 00:19:27.610 "cntlid": 73, 00:19:27.610 "qid": 0, 00:19:27.610 "state": "enabled", 00:19:27.610 "thread": "nvmf_tgt_poll_group_000", 00:19:27.610 "listen_address": { 00:19:27.610 "trtype": "TCP", 00:19:27.610 "adrfam": "IPv4", 00:19:27.610 "traddr": "10.0.0.2", 00:19:27.610 "trsvcid": "4420" 00:19:27.610 }, 00:19:27.610 "peer_address": { 00:19:27.610 "trtype": "TCP", 00:19:27.610 "adrfam": "IPv4", 00:19:27.610 "traddr": "10.0.0.1", 00:19:27.610 "trsvcid": "55996" 00:19:27.610 }, 00:19:27.610 "auth": { 00:19:27.610 
"state": "completed", 00:19:27.610 "digest": "sha384", 00:19:27.610 "dhgroup": "ffdhe4096" 00:19:27.610 } 00:19:27.610 } 00:19:27.610 ]' 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.610 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.869 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.869 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.869 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.869 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.869 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.869 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.436 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.695 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.696 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.696 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.696 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.696 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.954 00:19:28.954 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.954 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.954 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.212 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.212 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.212 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.212 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.212 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.212 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.212 { 00:19:29.212 "cntlid": 75, 00:19:29.212 "qid": 0, 00:19:29.212 "state": "enabled", 00:19:29.212 "thread": "nvmf_tgt_poll_group_000", 00:19:29.212 "listen_address": { 00:19:29.212 "trtype": "TCP", 00:19:29.212 "adrfam": "IPv4", 00:19:29.212 "traddr": "10.0.0.2", 00:19:29.212 "trsvcid": "4420" 00:19:29.212 }, 00:19:29.212 "peer_address": { 00:19:29.212 "trtype": "TCP", 00:19:29.212 "adrfam": "IPv4", 00:19:29.212 "traddr": "10.0.0.1", 00:19:29.212 "trsvcid": "54724" 00:19:29.212 }, 00:19:29.212 "auth": { 00:19:29.212 "state": "completed", 00:19:29.212 "digest": "sha384", 00:19:29.212 "dhgroup": "ffdhe4096" 00:19:29.212 } 00:19:29.212 } 00:19:29.212 ]' 00:19:29.212 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.212 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.212 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.212 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.212 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.213 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.213 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.213 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.474 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:30.041 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.041 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:30.042 15:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.042 15:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.042 15:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.042 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.042 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.042 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.301 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:30.560 00:19:30.560 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.560 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.560 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.560 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.560 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.560 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.560 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.821 { 00:19:30.821 "cntlid": 77, 00:19:30.821 "qid": 0, 00:19:30.821 "state": "enabled", 00:19:30.821 "thread": "nvmf_tgt_poll_group_000", 00:19:30.821 "listen_address": { 00:19:30.821 "trtype": "TCP", 00:19:30.821 "adrfam": "IPv4", 00:19:30.821 "traddr": "10.0.0.2", 00:19:30.821 "trsvcid": "4420" 00:19:30.821 }, 00:19:30.821 "peer_address": { 00:19:30.821 "trtype": "TCP", 00:19:30.821 "adrfam": "IPv4", 00:19:30.821 "traddr": "10.0.0.1", 00:19:30.821 "trsvcid": "54748" 00:19:30.821 }, 00:19:30.821 "auth": { 00:19:30.821 "state": "completed", 00:19:30.821 "digest": "sha384", 00:19:30.821 "dhgroup": "ffdhe4096" 00:19:30.821 } 00:19:30.821 } 00:19:30.821 ]' 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.821 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.081 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.647 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:31.648 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.648 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.648 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.648 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.648 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.907 00:19:31.907 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.907 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.907 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.165 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.166 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.166 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.166 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.166 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.166 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.166 { 00:19:32.166 "cntlid": 79, 00:19:32.166 "qid": 
0, 00:19:32.166 "state": "enabled", 00:19:32.166 "thread": "nvmf_tgt_poll_group_000", 00:19:32.166 "listen_address": { 00:19:32.166 "trtype": "TCP", 00:19:32.166 "adrfam": "IPv4", 00:19:32.166 "traddr": "10.0.0.2", 00:19:32.166 "trsvcid": "4420" 00:19:32.166 }, 00:19:32.166 "peer_address": { 00:19:32.166 "trtype": "TCP", 00:19:32.166 "adrfam": "IPv4", 00:19:32.166 "traddr": "10.0.0.1", 00:19:32.166 "trsvcid": "54778" 00:19:32.166 }, 00:19:32.166 "auth": { 00:19:32.166 "state": "completed", 00:19:32.166 "digest": "sha384", 00:19:32.166 "dhgroup": "ffdhe4096" 00:19:32.166 } 00:19:32.166 } 00:19:32.166 ]' 00:19:32.166 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.166 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.166 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.424 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.424 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.424 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.424 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.424 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.424 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:32.992 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.250 15:24:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.250 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.508 00:19:33.508 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.508 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.508 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.766 { 00:19:33.766 "cntlid": 81, 00:19:33.766 "qid": 0, 00:19:33.766 "state": "enabled", 00:19:33.766 "thread": "nvmf_tgt_poll_group_000", 00:19:33.766 "listen_address": { 00:19:33.766 "trtype": "TCP", 00:19:33.766 "adrfam": "IPv4", 00:19:33.766 "traddr": "10.0.0.2", 00:19:33.766 "trsvcid": "4420" 00:19:33.766 }, 00:19:33.766 "peer_address": { 00:19:33.766 "trtype": "TCP", 00:19:33.766 "adrfam": "IPv4", 00:19:33.766 "traddr": "10.0.0.1", 00:19:33.766 "trsvcid": "54790" 00:19:33.766 }, 00:19:33.766 "auth": { 00:19:33.766 "state": "completed", 00:19:33.766 "digest": "sha384", 00:19:33.766 "dhgroup": "ffdhe6144" 00:19:33.766 } 00:19:33.766 } 00:19:33.766 ]' 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.766 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.025 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.025 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.025 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.025 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.592 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.852 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.132 00:19:35.132 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.132 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.132 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.404 { 00:19:35.404 "cntlid": 83, 00:19:35.404 "qid": 0, 00:19:35.404 "state": "enabled", 00:19:35.404 "thread": "nvmf_tgt_poll_group_000", 00:19:35.404 "listen_address": { 00:19:35.404 "trtype": "TCP", 00:19:35.404 "adrfam": "IPv4", 00:19:35.404 "traddr": "10.0.0.2", 00:19:35.404 "trsvcid": "4420" 00:19:35.404 }, 00:19:35.404 "peer_address": { 00:19:35.404 "trtype": "TCP", 00:19:35.404 "adrfam": "IPv4", 00:19:35.404 "traddr": "10.0.0.1", 00:19:35.404 "trsvcid": "54824" 00:19:35.404 }, 00:19:35.404 "auth": { 00:19:35.404 "state": "completed", 00:19:35.404 "digest": "sha384", 00:19:35.404 "dhgroup": "ffdhe6144" 00:19:35.404 } 00:19:35.404 } 00:19:35.404 ]' 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.404 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.661 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret 
DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.226 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:36.483 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:36.483 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.484 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.740 00:19:36.740 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.740 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.740 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.008 { 00:19:37.008 "cntlid": 85, 00:19:37.008 "qid": 0, 00:19:37.008 "state": "enabled", 00:19:37.008 "thread": "nvmf_tgt_poll_group_000", 00:19:37.008 "listen_address": { 00:19:37.008 "trtype": "TCP", 00:19:37.008 "adrfam": "IPv4", 00:19:37.008 "traddr": "10.0.0.2", 00:19:37.008 "trsvcid": "4420" 00:19:37.008 }, 00:19:37.008 "peer_address": { 00:19:37.008 "trtype": "TCP", 00:19:37.008 "adrfam": "IPv4", 00:19:37.008 "traddr": "10.0.0.1", 00:19:37.008 "trsvcid": "54836" 00:19:37.008 }, 00:19:37.008 "auth": { 00:19:37.008 "state": "completed", 00:19:37.008 "digest": "sha384", 00:19:37.008 "dhgroup": "ffdhe6144" 00:19:37.008 } 00:19:37.008 } 00:19:37.008 ]' 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.008 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.266 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
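The run above repeats one fixed cycle per key: restrict the host's DH-HMAC-CHAP digests and dhgroups, register the host NQN on the subsystem with a key pair, attach a controller over TCP, confirm the qpair authenticated via jq, then detach. A minimal standalone sketch of that cycle, assuming an SPDK target already listening on 10.0.0.2:4420 with subsystem nqn.2024-03.io.spdk:cnode0 and serving RPCs on its default socket, a host-side bdev_nvme app on /var/tmp/host.sock, and key1/ckey1 already registered in the keyring as earlier in this run (HOST_NQN is a placeholder for the host NQN used throughout this log; rpc.py is shown by its repo-relative path rather than the full workspace path):

# target side: allow the host to authenticate with key1 (ctrlr key ckey1)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: pin negotiation to sha384 + ffdhe6144, then attach over TCP
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOST_NQN" \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify: controller present and the qpair finished DH-HMAC-CHAP
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state' # expect completed
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0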
00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.832 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.397 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.397 { 00:19:38.397 "cntlid": 87, 00:19:38.397 "qid": 0, 00:19:38.397 "state": "enabled", 00:19:38.397 "thread": "nvmf_tgt_poll_group_000", 00:19:38.397 "listen_address": { 00:19:38.397 "trtype": "TCP", 00:19:38.397 "adrfam": "IPv4", 00:19:38.397 "traddr": "10.0.0.2", 00:19:38.397 "trsvcid": "4420" 00:19:38.397 }, 00:19:38.397 "peer_address": { 00:19:38.397 "trtype": "TCP", 00:19:38.397 "adrfam": "IPv4", 00:19:38.397 "traddr": "10.0.0.1", 00:19:38.397 "trsvcid": "58988" 00:19:38.397 }, 00:19:38.397 "auth": { 00:19:38.397 "state": "completed", 
00:19:38.397 "digest": "sha384", 00:19:38.397 "dhgroup": "ffdhe6144" 00:19:38.397 } 00:19:38.397 } 00:19:38.397 ]' 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.397 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.655 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.655 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.655 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.655 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.655 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.655 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:39.222 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.482 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.050 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.050 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.050 { 00:19:40.050 "cntlid": 89, 00:19:40.050 "qid": 0, 00:19:40.050 "state": "enabled", 00:19:40.050 "thread": "nvmf_tgt_poll_group_000", 00:19:40.050 "listen_address": { 00:19:40.050 "trtype": "TCP", 00:19:40.050 "adrfam": "IPv4", 00:19:40.050 "traddr": "10.0.0.2", 00:19:40.050 "trsvcid": "4420" 00:19:40.050 }, 00:19:40.050 "peer_address": { 00:19:40.050 "trtype": "TCP", 00:19:40.050 "adrfam": "IPv4", 00:19:40.050 "traddr": "10.0.0.1", 00:19:40.050 "trsvcid": "59024" 00:19:40.050 }, 00:19:40.050 "auth": { 00:19:40.050 "state": "completed", 00:19:40.050 "digest": "sha384", 00:19:40.050 "dhgroup": "ffdhe8192" 00:19:40.051 } 00:19:40.051 } 00:19:40.051 ]' 00:19:40.051 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.310 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.310 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.310 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.310 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.310 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.310 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.310 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.569 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.136 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
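Each key is then exercised once more through the kernel initiator: nvme-cli connects with the raw DHHC-1 secrets instead of keyring names, and a disconnect plus nvmf_subsystem_remove_host resets the subsystem before the next key/dhgroup iteration. A condensed sketch, where SECRET and CTRL_SECRET stand in for the DHHC-1:xx: strings printed in the log and HOST_UUID for 006f0d1b-21c0-e711-906e-00163566263e (flags copied from the log):

# kernel initiator: bidirectional DH-HMAC-CHAP with raw secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" --hostid "$HOST_UUID" \
    --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: disconnected 1 controller(s)
# target side: deregister the host before the next pass
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}"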
00:19:41.704 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.704 { 00:19:41.704 "cntlid": 91, 00:19:41.704 "qid": 0, 00:19:41.704 "state": "enabled", 00:19:41.704 "thread": "nvmf_tgt_poll_group_000", 00:19:41.704 "listen_address": { 00:19:41.704 "trtype": "TCP", 00:19:41.704 "adrfam": "IPv4", 00:19:41.704 "traddr": "10.0.0.2", 00:19:41.704 "trsvcid": "4420" 00:19:41.704 }, 00:19:41.704 "peer_address": { 00:19:41.704 "trtype": "TCP", 00:19:41.704 "adrfam": "IPv4", 00:19:41.704 "traddr": "10.0.0.1", 00:19:41.704 "trsvcid": "59048" 00:19:41.704 }, 00:19:41.704 "auth": { 00:19:41.704 "state": "completed", 00:19:41.704 "digest": "sha384", 00:19:41.704 "dhgroup": "ffdhe8192" 00:19:41.704 } 00:19:41.704 } 00:19:41.704 ]' 00:19:41.704 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.962 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:41.962 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.962 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:41.962 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.962 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.962 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.962 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.221 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.788 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.355 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.355 { 
00:19:43.355 "cntlid": 93, 00:19:43.355 "qid": 0, 00:19:43.355 "state": "enabled", 00:19:43.355 "thread": "nvmf_tgt_poll_group_000", 00:19:43.355 "listen_address": { 00:19:43.355 "trtype": "TCP", 00:19:43.355 "adrfam": "IPv4", 00:19:43.355 "traddr": "10.0.0.2", 00:19:43.355 "trsvcid": "4420" 00:19:43.355 }, 00:19:43.355 "peer_address": { 00:19:43.355 "trtype": "TCP", 00:19:43.355 "adrfam": "IPv4", 00:19:43.355 "traddr": "10.0.0.1", 00:19:43.355 "trsvcid": "59086" 00:19:43.355 }, 00:19:43.355 "auth": { 00:19:43.355 "state": "completed", 00:19:43.355 "digest": "sha384", 00:19:43.355 "dhgroup": "ffdhe8192" 00:19:43.355 } 00:19:43.355 } 00:19:43.355 ]' 00:19:43.355 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.614 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:43.614 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.614 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.614 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.614 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.614 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.614 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.872 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:44.440 15:24:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.440 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.008 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.008 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.008 { 00:19:45.008 "cntlid": 95, 00:19:45.008 "qid": 0, 00:19:45.008 "state": "enabled", 00:19:45.008 "thread": "nvmf_tgt_poll_group_000", 00:19:45.008 "listen_address": { 00:19:45.008 "trtype": "TCP", 00:19:45.008 "adrfam": "IPv4", 00:19:45.008 "traddr": "10.0.0.2", 00:19:45.008 "trsvcid": "4420" 00:19:45.008 }, 00:19:45.008 "peer_address": { 00:19:45.008 "trtype": "TCP", 00:19:45.008 "adrfam": "IPv4", 00:19:45.008 "traddr": "10.0.0.1", 00:19:45.008 "trsvcid": "59120" 00:19:45.008 }, 00:19:45.008 "auth": { 00:19:45.008 "state": "completed", 00:19:45.008 "digest": "sha384", 00:19:45.008 "dhgroup": "ffdhe8192" 00:19:45.008 } 00:19:45.008 } 00:19:45.008 ]' 00:19:45.267 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.267 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.267 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.267 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.267 15:24:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.267 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.267 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.267 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.526 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.101 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.358 00:19:46.358 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.358 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.358 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.616 { 00:19:46.616 "cntlid": 97, 00:19:46.616 "qid": 0, 00:19:46.616 "state": "enabled", 00:19:46.616 "thread": "nvmf_tgt_poll_group_000", 00:19:46.616 "listen_address": { 00:19:46.616 "trtype": "TCP", 00:19:46.616 "adrfam": "IPv4", 00:19:46.616 "traddr": "10.0.0.2", 00:19:46.616 "trsvcid": "4420" 00:19:46.616 }, 00:19:46.616 "peer_address": { 00:19:46.616 "trtype": "TCP", 00:19:46.616 "adrfam": "IPv4", 00:19:46.616 "traddr": "10.0.0.1", 00:19:46.616 "trsvcid": "59136" 00:19:46.616 }, 00:19:46.616 "auth": { 00:19:46.616 "state": "completed", 00:19:46.616 "digest": "sha512", 00:19:46.616 "dhgroup": "null" 00:19:46.616 } 00:19:46.616 } 00:19:46.616 ]' 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.616 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.874 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret 
DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:47.439 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.697 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.954 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.954 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.954 { 00:19:47.954 "cntlid": 99, 00:19:47.954 "qid": 0, 00:19:47.954 "state": "enabled", 00:19:47.954 "thread": "nvmf_tgt_poll_group_000", 00:19:47.954 "listen_address": { 00:19:47.954 "trtype": "TCP", 00:19:47.954 "adrfam": "IPv4", 00:19:47.954 "traddr": "10.0.0.2", 00:19:47.954 "trsvcid": "4420" 00:19:47.954 }, 00:19:47.954 "peer_address": { 00:19:47.954 "trtype": "TCP", 00:19:47.954 "adrfam": "IPv4", 00:19:47.954 "traddr": "10.0.0.1", 00:19:47.954 "trsvcid": "59126" 00:19:47.954 }, 00:19:47.954 "auth": { 00:19:47.954 "state": "completed", 00:19:47.954 "digest": "sha512", 00:19:47.954 "dhgroup": "null" 00:19:47.954 } 00:19:47.954 } 00:19:47.955 ]' 00:19:47.955 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.212 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.212 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.212 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:48.212 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.212 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.212 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.212 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.471 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.038 15:24:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.038 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.296 00:19:49.297 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.297 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.297 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.555 { 00:19:49.555 "cntlid": 101, 00:19:49.555 "qid": 0, 00:19:49.555 "state": "enabled", 00:19:49.555 "thread": "nvmf_tgt_poll_group_000", 00:19:49.555 "listen_address": { 00:19:49.555 "trtype": "TCP", 00:19:49.555 "adrfam": "IPv4", 00:19:49.555 "traddr": "10.0.0.2", 00:19:49.555 "trsvcid": "4420" 00:19:49.555 }, 00:19:49.555 "peer_address": { 00:19:49.555 "trtype": "TCP", 00:19:49.555 "adrfam": "IPv4", 00:19:49.555 "traddr": "10.0.0.1", 00:19:49.555 "trsvcid": "59152" 00:19:49.555 }, 00:19:49.555 "auth": 
{ 00:19:49.555 "state": "completed", 00:19:49.555 "digest": "sha512", 00:19:49.555 "dhgroup": "null" 00:19:49.555 } 00:19:49.555 } 00:19:49.555 ]' 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.555 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.813 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:50.380 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.638 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.638 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.897 { 00:19:50.897 "cntlid": 103, 00:19:50.897 "qid": 0, 00:19:50.897 "state": "enabled", 00:19:50.897 "thread": "nvmf_tgt_poll_group_000", 00:19:50.897 "listen_address": { 00:19:50.897 "trtype": "TCP", 00:19:50.897 "adrfam": "IPv4", 00:19:50.897 "traddr": "10.0.0.2", 00:19:50.897 "trsvcid": "4420" 00:19:50.897 }, 00:19:50.897 "peer_address": { 00:19:50.897 "trtype": "TCP", 00:19:50.897 "adrfam": "IPv4", 00:19:50.897 "traddr": "10.0.0.1", 00:19:50.897 "trsvcid": "59178" 00:19:50.897 }, 00:19:50.897 "auth": { 00:19:50.897 "state": "completed", 00:19:50.897 "digest": "sha512", 00:19:50.897 "dhgroup": "null" 00:19:50.897 } 00:19:50.897 } 00:19:50.897 ]' 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.897 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.155 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:51.155 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.155 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.155 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.155 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.155 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.721 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:51.722 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.004 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.262 00:19:52.262 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.262 15:24:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.262 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.521 { 00:19:52.521 "cntlid": 105, 00:19:52.521 "qid": 0, 00:19:52.521 "state": "enabled", 00:19:52.521 "thread": "nvmf_tgt_poll_group_000", 00:19:52.521 "listen_address": { 00:19:52.521 "trtype": "TCP", 00:19:52.521 "adrfam": "IPv4", 00:19:52.521 "traddr": "10.0.0.2", 00:19:52.521 "trsvcid": "4420" 00:19:52.521 }, 00:19:52.521 "peer_address": { 00:19:52.521 "trtype": "TCP", 00:19:52.521 "adrfam": "IPv4", 00:19:52.521 "traddr": "10.0.0.1", 00:19:52.521 "trsvcid": "59210" 00:19:52.521 }, 00:19:52.521 "auth": { 00:19:52.521 "state": "completed", 00:19:52.521 "digest": "sha512", 00:19:52.521 "dhgroup": "ffdhe2048" 00:19:52.521 } 00:19:52.521 } 00:19:52.521 ]' 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.521 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.778 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:53.343 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.343 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:53.343 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.344 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
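The trace above repeats one fixed cycle for every digest/dhgroup/key combination: restrict the host to a single digest and DH group, authorize the host NQN on the subsystem with the key pair under test, attach a controller so DH-HMAC-CHAP actually runs on the new qpair, check that the qpair reports auth.state "completed", then tear everything down. As a minimal sketch of one such round, using only commands that appear verbatim in this trace, and assuming the two sockets the trace shows (/var/tmp/host.sock for the hostrpc calls, the target's default RPC socket for the rpc_cmd calls); key1/ckey1 stand in for key names registered earlier in the run, outside this excerpt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host side: restrict the initiator to the digest/dhgroup under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side (default RPC socket): authorize the host NQN with a key pair.
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attaching a controller performs DH-HMAC-CHAP on the new qpair.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the qpair authenticated, then detach before the next combination.
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Every [[ completed == \c\o\m\p\l\e\t\e\d ]] check in the trace is the verification step of this sketch succeeding; a failed handshake would leave auth.state short of "completed" and fail the test.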
00:19:53.344 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.344 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.344 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.344 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.602 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.602 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.860 { 00:19:53.860 "cntlid": 107, 00:19:53.860 "qid": 0, 00:19:53.860 "state": "enabled", 00:19:53.860 "thread": 
"nvmf_tgt_poll_group_000", 00:19:53.860 "listen_address": { 00:19:53.860 "trtype": "TCP", 00:19:53.860 "adrfam": "IPv4", 00:19:53.860 "traddr": "10.0.0.2", 00:19:53.860 "trsvcid": "4420" 00:19:53.860 }, 00:19:53.860 "peer_address": { 00:19:53.860 "trtype": "TCP", 00:19:53.860 "adrfam": "IPv4", 00:19:53.860 "traddr": "10.0.0.1", 00:19:53.860 "trsvcid": "59238" 00:19:53.860 }, 00:19:53.860 "auth": { 00:19:53.860 "state": "completed", 00:19:53.860 "digest": "sha512", 00:19:53.860 "dhgroup": "ffdhe2048" 00:19:53.860 } 00:19:53.860 } 00:19:53.860 ]' 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.860 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.119 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.119 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.119 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.119 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.119 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.119 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.686 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:54.944 15:24:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.944 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.203 00:19:55.203 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.203 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.203 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.463 { 00:19:55.463 "cntlid": 109, 00:19:55.463 "qid": 0, 00:19:55.463 "state": "enabled", 00:19:55.463 "thread": "nvmf_tgt_poll_group_000", 00:19:55.463 "listen_address": { 00:19:55.463 "trtype": "TCP", 00:19:55.463 "adrfam": "IPv4", 00:19:55.463 "traddr": "10.0.0.2", 00:19:55.463 "trsvcid": "4420" 00:19:55.463 }, 00:19:55.463 "peer_address": { 00:19:55.463 "trtype": "TCP", 00:19:55.463 "adrfam": "IPv4", 00:19:55.463 "traddr": "10.0.0.1", 00:19:55.463 "trsvcid": "59254" 00:19:55.463 }, 00:19:55.463 "auth": { 00:19:55.463 "state": "completed", 00:19:55.463 "digest": "sha512", 00:19:55.463 "dhgroup": "ffdhe2048" 00:19:55.463 } 00:19:55.463 } 00:19:55.463 ]' 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.463 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.721 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:19:56.288 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.288 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:56.288 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.288 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.288 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.288 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.288 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.288 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.547 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.547 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.806 { 00:19:56.806 "cntlid": 111, 00:19:56.806 "qid": 0, 00:19:56.806 "state": "enabled", 00:19:56.806 "thread": "nvmf_tgt_poll_group_000", 00:19:56.806 "listen_address": { 00:19:56.806 "trtype": "TCP", 00:19:56.806 "adrfam": "IPv4", 00:19:56.806 "traddr": "10.0.0.2", 00:19:56.806 "trsvcid": "4420" 00:19:56.806 }, 00:19:56.806 "peer_address": { 00:19:56.806 "trtype": "TCP", 00:19:56.806 "adrfam": "IPv4", 00:19:56.806 "traddr": "10.0.0.1", 00:19:56.806 "trsvcid": "59290" 00:19:56.806 }, 00:19:56.806 "auth": { 00:19:56.806 "state": "completed", 00:19:56.806 "digest": "sha512", 00:19:56.806 "dhgroup": "ffdhe2048" 00:19:56.806 } 00:19:56.806 } 00:19:56.806 ]' 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.806 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.065 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.065 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.065 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.065 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.065 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.065 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.633 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.892 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.151 00:19:58.151 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.151 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.151 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.409 { 00:19:58.409 "cntlid": 113, 00:19:58.409 "qid": 0, 00:19:58.409 "state": "enabled", 00:19:58.409 "thread": "nvmf_tgt_poll_group_000", 00:19:58.409 "listen_address": { 00:19:58.409 "trtype": "TCP", 00:19:58.409 "adrfam": "IPv4", 00:19:58.409 "traddr": "10.0.0.2", 00:19:58.409 "trsvcid": "4420" 00:19:58.409 }, 00:19:58.409 "peer_address": { 00:19:58.409 "trtype": "TCP", 00:19:58.409 "adrfam": "IPv4", 00:19:58.409 "traddr": "10.0.0.1", 00:19:58.409 "trsvcid": "46172" 00:19:58.409 }, 00:19:58.409 "auth": { 00:19:58.409 "state": "completed", 00:19:58.409 "digest": "sha512", 00:19:58.409 "dhgroup": "ffdhe3072" 00:19:58.409 } 00:19:58.409 } 00:19:58.409 ]' 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.409 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.669 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.237 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.495 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.754 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.754 { 00:19:59.754 "cntlid": 115, 00:19:59.754 "qid": 0, 00:19:59.754 "state": "enabled", 00:19:59.754 "thread": "nvmf_tgt_poll_group_000", 00:19:59.754 "listen_address": { 00:19:59.754 "trtype": "TCP", 00:19:59.754 "adrfam": "IPv4", 00:19:59.754 "traddr": "10.0.0.2", 00:19:59.754 "trsvcid": "4420" 00:19:59.754 }, 00:19:59.754 "peer_address": { 00:19:59.754 "trtype": "TCP", 00:19:59.754 "adrfam": "IPv4", 00:19:59.754 "traddr": "10.0.0.1", 00:19:59.754 "trsvcid": "46188" 00:19:59.754 }, 00:19:59.754 "auth": { 00:19:59.754 "state": "completed", 00:19:59.754 "digest": "sha512", 00:19:59.754 "dhgroup": "ffdhe3072" 00:19:59.754 } 00:19:59.754 } 
00:19:59.754 ]' 00:19:59.754 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.012 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.012 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.012 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.012 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.012 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.012 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.012 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.271 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.840 15:25:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.840 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.098 00:20:01.098 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.098 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.098 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.356 { 00:20:01.356 "cntlid": 117, 00:20:01.356 "qid": 0, 00:20:01.356 "state": "enabled", 00:20:01.356 "thread": "nvmf_tgt_poll_group_000", 00:20:01.356 "listen_address": { 00:20:01.356 "trtype": "TCP", 00:20:01.356 "adrfam": "IPv4", 00:20:01.356 "traddr": "10.0.0.2", 00:20:01.356 "trsvcid": "4420" 00:20:01.356 }, 00:20:01.356 "peer_address": { 00:20:01.356 "trtype": "TCP", 00:20:01.356 "adrfam": "IPv4", 00:20:01.356 "traddr": "10.0.0.1", 00:20:01.356 "trsvcid": "46202" 00:20:01.356 }, 00:20:01.356 "auth": { 00:20:01.356 "state": "completed", 00:20:01.356 "digest": "sha512", 00:20:01.356 "dhgroup": "ffdhe3072" 00:20:01.356 } 00:20:01.356 } 00:20:01.356 ]' 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.356 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.615 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.182 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.441 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.700 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.700 { 00:20:02.700 "cntlid": 119, 00:20:02.700 "qid": 0, 00:20:02.700 "state": "enabled", 00:20:02.700 "thread": "nvmf_tgt_poll_group_000", 00:20:02.700 "listen_address": { 00:20:02.700 "trtype": "TCP", 00:20:02.700 "adrfam": "IPv4", 00:20:02.700 "traddr": "10.0.0.2", 00:20:02.700 "trsvcid": "4420" 00:20:02.700 }, 00:20:02.700 "peer_address": { 00:20:02.700 "trtype": "TCP", 00:20:02.700 "adrfam": "IPv4", 00:20:02.700 "traddr": "10.0.0.1", 00:20:02.700 "trsvcid": "46224" 00:20:02.700 }, 00:20:02.700 "auth": { 00:20:02.700 "state": "completed", 00:20:02.700 "digest": "sha512", 00:20:02.700 "dhgroup": "ffdhe3072" 00:20:02.700 } 00:20:02.700 } 00:20:02.700 ]' 00:20:02.700 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.959 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.959 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.959 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:02.959 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.959 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.959 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.959 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.218 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:20:03.784 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.784 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:03.784 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.784 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.784 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.784 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.785 15:25:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.785 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.043 00:20:04.043 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.043 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.043 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.301 { 00:20:04.301 "cntlid": 121, 00:20:04.301 "qid": 0, 00:20:04.301 "state": "enabled", 00:20:04.301 "thread": "nvmf_tgt_poll_group_000", 00:20:04.301 "listen_address": { 00:20:04.301 "trtype": "TCP", 00:20:04.301 "adrfam": "IPv4", 
00:20:04.301 "traddr": "10.0.0.2", 00:20:04.301 "trsvcid": "4420" 00:20:04.301 }, 00:20:04.301 "peer_address": { 00:20:04.301 "trtype": "TCP", 00:20:04.301 "adrfam": "IPv4", 00:20:04.301 "traddr": "10.0.0.1", 00:20:04.301 "trsvcid": "46250" 00:20:04.301 }, 00:20:04.301 "auth": { 00:20:04.301 "state": "completed", 00:20:04.301 "digest": "sha512", 00:20:04.301 "dhgroup": "ffdhe4096" 00:20:04.301 } 00:20:04.301 } 00:20:04.301 ]' 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.301 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.559 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.126 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.385 15:25:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.385 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.643 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.643 { 00:20:05.643 "cntlid": 123, 00:20:05.643 "qid": 0, 00:20:05.643 "state": "enabled", 00:20:05.643 "thread": "nvmf_tgt_poll_group_000", 00:20:05.643 "listen_address": { 00:20:05.643 "trtype": "TCP", 00:20:05.643 "adrfam": "IPv4", 00:20:05.643 "traddr": "10.0.0.2", 00:20:05.643 "trsvcid": "4420" 00:20:05.643 }, 00:20:05.643 "peer_address": { 00:20:05.643 "trtype": "TCP", 00:20:05.643 "adrfam": "IPv4", 00:20:05.643 "traddr": "10.0.0.1", 00:20:05.643 "trsvcid": "46276" 00:20:05.643 }, 00:20:05.643 "auth": { 00:20:05.643 "state": "completed", 00:20:05.643 "digest": "sha512", 00:20:05.643 "dhgroup": "ffdhe4096" 00:20:05.643 } 00:20:05.643 } 00:20:05.643 ]' 00:20:05.643 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.902 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.902 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.902 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.902 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.902 15:25:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.902 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.902 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.159 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.723 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.980 00:20:06.980 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.980 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.980 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.237 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.237 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.237 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.237 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.237 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.237 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.237 { 00:20:07.237 "cntlid": 125, 00:20:07.237 "qid": 0, 00:20:07.237 "state": "enabled", 00:20:07.237 "thread": "nvmf_tgt_poll_group_000", 00:20:07.237 "listen_address": { 00:20:07.237 "trtype": "TCP", 00:20:07.237 "adrfam": "IPv4", 00:20:07.237 "traddr": "10.0.0.2", 00:20:07.237 "trsvcid": "4420" 00:20:07.237 }, 00:20:07.237 "peer_address": { 00:20:07.237 "trtype": "TCP", 00:20:07.237 "adrfam": "IPv4", 00:20:07.237 "traddr": "10.0.0.1", 00:20:07.237 "trsvcid": "46288" 00:20:07.237 }, 00:20:07.237 "auth": { 00:20:07.237 "state": "completed", 00:20:07.237 "digest": "sha512", 00:20:07.237 "dhgroup": "ffdhe4096" 00:20:07.237 } 00:20:07.237 } 00:20:07.237 ]' 00:20:07.237 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.238 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.238 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.238 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.238 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.238 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.238 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.238 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.495 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:08.061 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.318 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.615 00:20:08.615 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.615 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.615 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.889 { 00:20:08.889 "cntlid": 127, 00:20:08.889 "qid": 0, 00:20:08.889 "state": "enabled", 00:20:08.889 "thread": "nvmf_tgt_poll_group_000", 00:20:08.889 "listen_address": { 00:20:08.889 "trtype": "TCP", 00:20:08.889 "adrfam": "IPv4", 00:20:08.889 "traddr": "10.0.0.2", 00:20:08.889 "trsvcid": "4420" 00:20:08.889 }, 00:20:08.889 "peer_address": { 00:20:08.889 "trtype": "TCP", 00:20:08.889 "adrfam": "IPv4", 00:20:08.889 "traddr": "10.0.0.1", 00:20:08.889 "trsvcid": "37640" 00:20:08.889 }, 00:20:08.889 "auth": { 00:20:08.889 "state": "completed", 00:20:08.889 "digest": "sha512", 00:20:08.889 "dhgroup": "ffdhe4096" 00:20:08.889 } 00:20:08.889 } 00:20:08.889 ]' 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.889 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.147 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.712 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.277 00:20:10.277 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.277 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.277 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.277 { 00:20:10.277 "cntlid": 129, 00:20:10.277 "qid": 0, 00:20:10.277 "state": "enabled", 00:20:10.277 "thread": "nvmf_tgt_poll_group_000", 00:20:10.277 "listen_address": { 00:20:10.277 "trtype": "TCP", 00:20:10.277 "adrfam": "IPv4", 00:20:10.277 "traddr": "10.0.0.2", 00:20:10.277 "trsvcid": "4420" 00:20:10.277 }, 00:20:10.277 "peer_address": { 00:20:10.277 "trtype": "TCP", 00:20:10.277 "adrfam": "IPv4", 00:20:10.277 "traddr": "10.0.0.1", 00:20:10.277 "trsvcid": "37676" 00:20:10.277 }, 00:20:10.277 "auth": { 00:20:10.277 "state": "completed", 00:20:10.277 "digest": "sha512", 00:20:10.277 "dhgroup": "ffdhe6144" 00:20:10.277 } 00:20:10.277 } 00:20:10.277 ]' 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.277 15:25:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:10.277 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.536 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.536 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.536 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.536 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.102 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.360 15:25:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.360 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.618 00:20:11.618 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.618 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.618 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.877 { 00:20:11.877 "cntlid": 131, 00:20:11.877 "qid": 0, 00:20:11.877 "state": "enabled", 00:20:11.877 "thread": "nvmf_tgt_poll_group_000", 00:20:11.877 "listen_address": { 00:20:11.877 "trtype": "TCP", 00:20:11.877 "adrfam": "IPv4", 00:20:11.877 "traddr": "10.0.0.2", 00:20:11.877 "trsvcid": "4420" 00:20:11.877 }, 00:20:11.877 "peer_address": { 00:20:11.877 "trtype": "TCP", 00:20:11.877 "adrfam": "IPv4", 00:20:11.877 "traddr": "10.0.0.1", 00:20:11.877 "trsvcid": "37706" 00:20:11.877 }, 00:20:11.877 "auth": { 00:20:11.877 "state": "completed", 00:20:11.877 "digest": "sha512", 00:20:11.877 "dhgroup": "ffdhe6144" 00:20:11.877 } 00:20:11.877 } 00:20:11.877 ]' 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.877 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.136 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:12.703 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.961 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.219 00:20:13.219 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.219 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.219 15:25:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.477 { 00:20:13.477 "cntlid": 133, 00:20:13.477 "qid": 0, 00:20:13.477 "state": "enabled", 00:20:13.477 "thread": "nvmf_tgt_poll_group_000", 00:20:13.477 "listen_address": { 00:20:13.477 "trtype": "TCP", 00:20:13.477 "adrfam": "IPv4", 00:20:13.477 "traddr": "10.0.0.2", 00:20:13.477 "trsvcid": "4420" 00:20:13.477 }, 00:20:13.477 "peer_address": { 00:20:13.477 "trtype": "TCP", 00:20:13.477 "adrfam": "IPv4", 00:20:13.477 "traddr": "10.0.0.1", 00:20:13.477 "trsvcid": "37726" 00:20:13.477 }, 00:20:13.477 "auth": { 00:20:13.477 "state": "completed", 00:20:13.477 "digest": "sha512", 00:20:13.477 "dhgroup": "ffdhe6144" 00:20:13.477 } 00:20:13.477 } 00:20:13.477 ]' 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.477 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.736 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:20:14.303 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.303 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:14.303 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.303 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.303 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.303 15:25:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.303 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.303 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.562 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.820 00:20:14.821 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.821 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.821 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.079 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.079 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.079 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.079 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.079 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.079 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.079 { 00:20:15.079 "cntlid": 135, 00:20:15.079 "qid": 0, 00:20:15.079 "state": "enabled", 00:20:15.080 "thread": "nvmf_tgt_poll_group_000", 00:20:15.080 "listen_address": { 00:20:15.080 "trtype": "TCP", 00:20:15.080 "adrfam": "IPv4", 00:20:15.080 "traddr": "10.0.0.2", 00:20:15.080 "trsvcid": "4420" 00:20:15.080 }, 
00:20:15.080 "peer_address": { 00:20:15.080 "trtype": "TCP", 00:20:15.080 "adrfam": "IPv4", 00:20:15.080 "traddr": "10.0.0.1", 00:20:15.080 "trsvcid": "37756" 00:20:15.080 }, 00:20:15.080 "auth": { 00:20:15.080 "state": "completed", 00:20:15.080 "digest": "sha512", 00:20:15.080 "dhgroup": "ffdhe6144" 00:20:15.080 } 00:20:15.080 } 00:20:15.080 ]' 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.080 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.339 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.907 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.166 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.166 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.166 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.425 00:20:16.425 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.425 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.425 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.684 { 00:20:16.684 "cntlid": 137, 00:20:16.684 "qid": 0, 00:20:16.684 "state": "enabled", 00:20:16.684 "thread": "nvmf_tgt_poll_group_000", 00:20:16.684 "listen_address": { 00:20:16.684 "trtype": "TCP", 00:20:16.684 "adrfam": "IPv4", 00:20:16.684 "traddr": "10.0.0.2", 00:20:16.684 "trsvcid": "4420" 00:20:16.684 }, 00:20:16.684 "peer_address": { 00:20:16.684 "trtype": "TCP", 00:20:16.684 "adrfam": "IPv4", 00:20:16.684 "traddr": "10.0.0.1", 00:20:16.684 "trsvcid": "37772" 00:20:16.684 }, 00:20:16.684 "auth": { 00:20:16.684 "state": "completed", 00:20:16.684 "digest": "sha512", 00:20:16.684 "dhgroup": "ffdhe8192" 00:20:16.684 } 00:20:16.684 } 00:20:16.684 ]' 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.684 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.685 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.685 15:25:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.685 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.944 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.513 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.772 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.031 00:20:18.031 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.031 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.290 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.290 { 00:20:18.290 "cntlid": 139, 00:20:18.290 "qid": 0, 00:20:18.290 "state": "enabled", 00:20:18.290 "thread": "nvmf_tgt_poll_group_000", 00:20:18.290 "listen_address": { 00:20:18.290 "trtype": "TCP", 00:20:18.290 "adrfam": "IPv4", 00:20:18.290 "traddr": "10.0.0.2", 00:20:18.290 "trsvcid": "4420" 00:20:18.290 }, 00:20:18.290 "peer_address": { 00:20:18.290 "trtype": "TCP", 00:20:18.290 "adrfam": "IPv4", 00:20:18.290 "traddr": "10.0.0.1", 00:20:18.290 "trsvcid": "37514" 00:20:18.290 }, 00:20:18.290 "auth": { 00:20:18.290 "state": "completed", 00:20:18.290 "digest": "sha512", 00:20:18.290 "dhgroup": "ffdhe8192" 00:20:18.290 } 00:20:18.290 } 00:20:18.290 ]' 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.290 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.549 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.549 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.549 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.549 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.549 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.549 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:ZWFiNGZiMGJlNTU5MzcxZjAwNGViYmViYjQwMmE2NGSbZLbl: --dhchap-ctrl-secret DHHC-1:02:YmQyYmQzNzJkY2U5ZTM3NzQ4MTkwODZkZDYxYzA1NWM2OGJiNDY3Y2M1NzZkYjlmeHPAQA==: 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.117 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.377 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.944 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.944 { 00:20:19.944 "cntlid": 141, 00:20:19.944 "qid": 0, 00:20:19.944 "state": "enabled", 00:20:19.944 "thread": "nvmf_tgt_poll_group_000", 00:20:19.944 "listen_address": { 00:20:19.944 "trtype": "TCP", 00:20:19.944 "adrfam": "IPv4", 00:20:19.944 "traddr": "10.0.0.2", 00:20:19.944 "trsvcid": "4420" 00:20:19.944 }, 00:20:19.944 "peer_address": { 00:20:19.944 "trtype": "TCP", 00:20:19.944 "adrfam": "IPv4", 00:20:19.944 "traddr": "10.0.0.1", 00:20:19.944 "trsvcid": "37538" 00:20:19.944 }, 00:20:19.944 "auth": { 00:20:19.944 "state": "completed", 00:20:19.944 "digest": "sha512", 00:20:19.944 "dhgroup": "ffdhe8192" 00:20:19.944 } 00:20:19.944 } 00:20:19.944 ]' 00:20:19.944 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.202 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.202 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.202 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.202 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.203 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.203 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.203 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.461 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:ODczMTA2MzhiMmRlMGQ4NzE5OTU1YjdhMTM0NjRjNDhiNDc3ZDZiZmJlZjA5NWNkoTggUw==: --dhchap-ctrl-secret DHHC-1:01:ZTZkOWVkNjNjNzU0NWYwZmMyNjAwODBkOWRlZDdkZTf7PGQl: 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.031 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.598 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.598 { 00:20:21.598 "cntlid": 143, 00:20:21.598 "qid": 0, 00:20:21.598 "state": "enabled", 00:20:21.598 "thread": "nvmf_tgt_poll_group_000", 00:20:21.598 "listen_address": { 00:20:21.598 "trtype": "TCP", 00:20:21.598 "adrfam": "IPv4", 00:20:21.598 "traddr": "10.0.0.2", 00:20:21.598 "trsvcid": "4420" 00:20:21.598 }, 00:20:21.598 "peer_address": { 00:20:21.598 "trtype": "TCP", 00:20:21.598 "adrfam": "IPv4", 00:20:21.598 "traddr": "10.0.0.1", 00:20:21.598 "trsvcid": "37564" 00:20:21.598 }, 00:20:21.598 "auth": { 00:20:21.598 "state": "completed", 00:20:21.598 "digest": "sha512", 00:20:21.598 "dhgroup": "ffdhe8192" 00:20:21.598 } 00:20:21.598 } 00:20:21.598 ]' 00:20:21.598 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.856 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.856 
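Note that the key3 rounds call nvmf_subsystem_add_host and bdev_nvme_attach_controller with no --dhchap-ctrlr-key at all: the ${ckeys[$3]:+...} expansion at auth.sh@37 collapses to nothing when the controller key for that index is empty, so key3 exercises unidirectional authentication. A minimal sketch of the idiom, assuming the array shape (the real definitions live in auth.sh):

  ckeys=(ckey0 ckey1 ckey2 "")   # assumed: last slot deliberately empty
  keyid=3
  # ${var:+word} expands to word only if var is set and non-empty, so this
  # array is either (--dhchap-ctrlr-key ckey3) or entirely empty.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args: ${ckey[*]:-none}"   # -> extra args: none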
15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.856 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.856 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.856 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.856 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.856 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.114 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:22.681 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.248 00:20:23.248 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.248 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.248 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.506 { 00:20:23.506 "cntlid": 145, 00:20:23.506 "qid": 0, 00:20:23.506 "state": "enabled", 00:20:23.506 "thread": "nvmf_tgt_poll_group_000", 00:20:23.506 "listen_address": { 00:20:23.506 "trtype": "TCP", 00:20:23.506 "adrfam": "IPv4", 00:20:23.506 "traddr": "10.0.0.2", 00:20:23.506 "trsvcid": "4420" 00:20:23.506 }, 00:20:23.506 "peer_address": { 00:20:23.506 "trtype": "TCP", 00:20:23.506 "adrfam": "IPv4", 00:20:23.506 "traddr": "10.0.0.1", 00:20:23.506 "trsvcid": "37602" 00:20:23.506 }, 00:20:23.506 "auth": { 00:20:23.506 "state": "completed", 00:20:23.506 "digest": "sha512", 00:20:23.506 "dhgroup": "ffdhe8192" 00:20:23.506 } 00:20:23.506 } 00:20:23.506 ]' 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.506 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.763 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZDhhOGEyYWYyZGRmNzA2OGQ4YmQyOTNiMzE0Yzk1Njc2MDFlYWY1ZTNlN2RlOWM3K0fCMQ==: --dhchap-ctrl-secret DHHC-1:03:N2NkNjk2YWE5NjI2NGIxNmU4ODIyNTIzOWIxYTA4ZmNhM2ExMDcxODQwODMwOGViNzdhOTA5NzczYTAyZjQ4McVZPBM=: 00:20:24.329 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.329 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:24.329 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.329 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.330 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:24.589 request: 00:20:24.589 { 00:20:24.589 "name": "nvme0", 00:20:24.589 "trtype": "tcp", 00:20:24.589 "traddr": "10.0.0.2", 00:20:24.589 "adrfam": "ipv4", 00:20:24.589 "trsvcid": "4420", 00:20:24.589 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:24.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:20:24.589 "prchk_reftag": false, 00:20:24.589 "prchk_guard": false, 00:20:24.589 "hdgst": false, 00:20:24.589 "ddgst": false, 00:20:24.589 "dhchap_key": "key2", 00:20:24.589 "method": "bdev_nvme_attach_controller", 00:20:24.589 "req_id": 1 00:20:24.589 } 00:20:24.589 Got JSON-RPC error response 00:20:24.589 response: 00:20:24.589 { 00:20:24.589 "code": -5, 00:20:24.589 "message": "Input/output error" 00:20:24.589 } 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.589 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:24.848 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:25.107 request: 00:20:25.107 { 00:20:25.107 "name": "nvme0", 00:20:25.107 "trtype": "tcp", 00:20:25.107 "traddr": "10.0.0.2", 00:20:25.107 "adrfam": "ipv4", 00:20:25.107 "trsvcid": "4420", 00:20:25.107 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:25.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:20:25.107 "prchk_reftag": false, 00:20:25.107 "prchk_guard": false, 00:20:25.107 "hdgst": false, 00:20:25.107 "ddgst": false, 00:20:25.107 "dhchap_key": "key1", 00:20:25.107 "dhchap_ctrlr_key": "ckey2", 00:20:25.107 "method": "bdev_nvme_attach_controller", 00:20:25.107 "req_id": 1 00:20:25.107 } 00:20:25.107 Got JSON-RPC error response 00:20:25.107 response: 00:20:25.107 { 00:20:25.107 "code": -5, 00:20:25.107 "message": "Input/output error" 00:20:25.107 } 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.107 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.712 request: 00:20:25.712 { 00:20:25.712 "name": "nvme0", 00:20:25.712 "trtype": "tcp", 00:20:25.712 "traddr": "10.0.0.2", 00:20:25.712 "adrfam": "ipv4", 00:20:25.712 "trsvcid": "4420", 00:20:25.712 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:25.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:20:25.712 "prchk_reftag": false, 00:20:25.712 "prchk_guard": false, 00:20:25.712 "hdgst": false, 00:20:25.712 "ddgst": false, 00:20:25.712 "dhchap_key": "key1", 00:20:25.712 "dhchap_ctrlr_key": "ckey1", 00:20:25.712 "method": "bdev_nvme_attach_controller", 00:20:25.712 "req_id": 1 00:20:25.712 } 00:20:25.712 Got JSON-RPC error response 00:20:25.712 response: 00:20:25.712 { 00:20:25.712 "code": -5, 00:20:25.712 "message": "Input/output error" 00:20:25.712 } 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3055710 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3055710 ']' 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3055710 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3055710 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3055710' 00:20:25.712 killing process with pid 3055710 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3055710 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3055710 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3076939 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3076939 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3076939 ']' 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.712 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.971 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.971 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3076939 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3076939 ']' 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
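At this point the first target process (pid 3055710) has been killed and a fresh one started with DH-HMAC-CHAP debug tracing enabled, so the remaining negative tests run against an instrumented target. A sketch of that restart under the same namespace and flags shown in the trace; waitforlisten's real implementation lives in autotest_common.sh, so the polling loop below is only a stand-in:

  # Relaunch nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -L nvmf_auth,
  # then block until the default RPC socket answers.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done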
00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.908 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.167 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.167 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.167 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.426 00:20:27.426 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.426 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.426 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.686 { 00:20:27.686 
"cntlid": 1, 00:20:27.686 "qid": 0, 00:20:27.686 "state": "enabled", 00:20:27.686 "thread": "nvmf_tgt_poll_group_000", 00:20:27.686 "listen_address": { 00:20:27.686 "trtype": "TCP", 00:20:27.686 "adrfam": "IPv4", 00:20:27.686 "traddr": "10.0.0.2", 00:20:27.686 "trsvcid": "4420" 00:20:27.686 }, 00:20:27.686 "peer_address": { 00:20:27.686 "trtype": "TCP", 00:20:27.686 "adrfam": "IPv4", 00:20:27.686 "traddr": "10.0.0.1", 00:20:27.686 "trsvcid": "37668" 00:20:27.686 }, 00:20:27.686 "auth": { 00:20:27.686 "state": "completed", 00:20:27.686 "digest": "sha512", 00:20:27.686 "dhgroup": "ffdhe8192" 00:20:27.686 } 00:20:27.686 } 00:20:27.686 ]' 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.686 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.945 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.945 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.945 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.945 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NTE4NzgzNjczNTBlMTUzNWMyM2Y4MTI3NjNmN2RiMTkzYzRhYmIwNzgzZDg1MmZiYTdjNWRiNjg4ZDJlZGYwM/cmdi8=: 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:28.512 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:28.771 15:25:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.771 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:28.771 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.772 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:28.772 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.772 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:28.772 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:28.772 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.772 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.031 request: 00:20:29.031 { 00:20:29.031 "name": "nvme0", 00:20:29.031 "trtype": "tcp", 00:20:29.031 "traddr": "10.0.0.2", 00:20:29.031 "adrfam": "ipv4", 00:20:29.031 "trsvcid": "4420", 00:20:29.031 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:29.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:20:29.031 "prchk_reftag": false, 00:20:29.031 "prchk_guard": false, 00:20:29.031 "hdgst": false, 00:20:29.031 "ddgst": false, 00:20:29.031 "dhchap_key": "key3", 00:20:29.031 "method": "bdev_nvme_attach_controller", 00:20:29.031 "req_id": 1 00:20:29.031 } 00:20:29.031 Got JSON-RPC error response 00:20:29.031 response: 00:20:29.031 { 00:20:29.031 "code": -5, 00:20:29.031 "message": "Input/output error" 00:20:29.031 } 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.031 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:29.290 request: 00:20:29.290 { 00:20:29.290 "name": "nvme0", 00:20:29.290 "trtype": "tcp", 00:20:29.290 "traddr": "10.0.0.2", 00:20:29.290 "adrfam": "ipv4", 00:20:29.290 "trsvcid": "4420", 00:20:29.290 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:29.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:20:29.290 "prchk_reftag": false, 00:20:29.290 "prchk_guard": false, 00:20:29.290 "hdgst": false, 00:20:29.290 "ddgst": false, 00:20:29.290 "dhchap_key": "key3", 00:20:29.290 "method": "bdev_nvme_attach_controller", 00:20:29.290 "req_id": 1 00:20:29.290 } 00:20:29.290 Got JSON-RPC error response 00:20:29.290 response: 00:20:29.290 { 00:20:29.290 "code": -5, 00:20:29.290 "message": "Input/output error" 00:20:29.290 } 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:29.290 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:29.548 request: 00:20:29.548 { 00:20:29.548 "name": "nvme0", 00:20:29.548 "trtype": "tcp", 00:20:29.548 "traddr": "10.0.0.2", 00:20:29.548 "adrfam": "ipv4", 00:20:29.548 "trsvcid": "4420", 00:20:29.548 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:29.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:20:29.548 "prchk_reftag": false, 00:20:29.548 "prchk_guard": false, 00:20:29.548 "hdgst": false, 00:20:29.548 "ddgst": false, 00:20:29.548 
"dhchap_key": "key0", 00:20:29.548 "dhchap_ctrlr_key": "key1", 00:20:29.548 "method": "bdev_nvme_attach_controller", 00:20:29.548 "req_id": 1 00:20:29.548 } 00:20:29.548 Got JSON-RPC error response 00:20:29.548 response: 00:20:29.548 { 00:20:29.548 "code": -5, 00:20:29.548 "message": "Input/output error" 00:20:29.548 } 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:29.548 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:29.806 00:20:29.806 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:29.806 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:29.806 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.064 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.064 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.064 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3055939 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3055939 ']' 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3055939 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3055939 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3055939' 00:20:30.342 killing process with pid 3055939 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3055939 00:20:30.342 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3055939 
00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.601 rmmod nvme_tcp 00:20:30.601 rmmod nvme_fabrics 00:20:30.601 rmmod nvme_keyring 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3076939 ']' 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3076939 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3076939 ']' 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3076939 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.601 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3076939 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3076939' 00:20:30.860 killing process with pid 3076939 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3076939 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3076939 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.860 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.398 15:25:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:33.398 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.NSG /tmp/spdk.key-sha256.xSs /tmp/spdk.key-sha384.ezA /tmp/spdk.key-sha512.GL9 /tmp/spdk.key-sha512.W2e /tmp/spdk.key-sha384.fbV /tmp/spdk.key-sha256.NcN '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:33.398 00:20:33.398 real 2m9.532s 00:20:33.398 user 4m48.464s 00:20:33.398 sys 0m28.701s 00:20:33.398 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:33.398 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.398 ************************************ 00:20:33.398 END TEST nvmf_auth_target 00:20:33.398 ************************************ 00:20:33.398 15:25:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:33.398 15:25:36 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:33.398 15:25:36 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:33.398 15:25:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:33.398 15:25:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.398 15:25:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:33.398 ************************************ 00:20:33.398 START TEST nvmf_bdevio_no_huge 00:20:33.398 ************************************ 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:33.398 * Looking for test storage... 00:20:33.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
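The "Got JSON-RPC error response" blocks logged earlier in the auth test were expected failures: target/auth.sh first narrows the host's allowed DH-CHAP digests (and later DH groups and keys) so that negotiation cannot complete, then uses the suite's NOT helper to require that the attach fails, which is why the code -5 / "Input/output error" responses count as passes. A hedged sketch of that check, reusing the same RPCs (the if-wrapper here stands in for the NOT helper and is not SPDK code):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# Offer only sha256 on the host; in this run that made negotiation fail.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
  echo "attach unexpectedly succeeded" >&2; exit 1
fi

# Restore the full menus before the next positive test, as the script did.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha256,sha384,sha512 \
  --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192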
00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.398 15:25:36 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:33.398 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:33.399 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:33.399 15:25:36 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:39.964 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:39.964 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:39.964 
15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:39.964 Found net devices under 0000:af:00.0: cvl_0_0 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:39.964 Found net devices under 0000:af:00.1: cvl_0_1 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.964 15:25:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:39.964 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:40.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:20:40.223 00:20:40.223 --- 10.0.0.2 ping statistics --- 00:20:40.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.223 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:20:40.223 00:20:40.223 --- 10.0.0.1 ping statistics --- 00:20:40.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.223 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.223 15:25:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3081556 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3081556 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3081556 ']' 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.223 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.223 [2024-07-15 15:25:44.064973] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:20:40.223 [2024-07-15 15:25:44.065019] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:40.482 [2024-07-15 15:25:44.142790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.482 [2024-07-15 15:25:44.237737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.482 [2024-07-15 15:25:44.237777] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.482 [2024-07-15 15:25:44.237786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.482 [2024-07-15 15:25:44.237794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.482 [2024-07-15 15:25:44.237820] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
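Before the reactor notices below, nvmf/common.sh had already wired up the namespace plumbing that lets the host reach the target over the physical e810 ports. A condensed sketch of that setup and the no-hugepage launch, using only commands traced in this log (interface names cvl_0_0/cvl_0_1 and the core mask are specific to this machine; the initial addr-flush steps are omitted):

# Put one port in a namespace and address both ends (10.0.0.2 = target, 10.0.0.1 = host).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Launch the target inside the namespace: -m 0x78 pins reactors to cores 3-6
# (matching the "Reactor started on core 3/4/5/6" notices below), and
# --no-huge -s 1024 gives it 1024 MB of ordinary memory instead of hugepages.
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78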
00:20:40.482 [2024-07-15 15:25:44.237938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.482 [2024-07-15 15:25:44.238054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:40.482 [2024-07-15 15:25:44.238507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.482 [2024-07-15 15:25:44.238507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.047 [2024-07-15 15:25:44.917479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.047 Malloc0 00:20:41.047 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.048 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.048 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.048 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.048 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.048 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.048 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.048 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.305 [2024-07-15 15:25:44.962250] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:41.305 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:41.306 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:41.306 { 00:20:41.306 "params": { 00:20:41.306 "name": "Nvme$subsystem", 00:20:41.306 "trtype": "$TEST_TRANSPORT", 00:20:41.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.306 "adrfam": "ipv4", 00:20:41.306 "trsvcid": "$NVMF_PORT", 00:20:41.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.306 "hdgst": ${hdgst:-false}, 00:20:41.306 "ddgst": ${ddgst:-false} 00:20:41.306 }, 00:20:41.306 "method": "bdev_nvme_attach_controller" 00:20:41.306 } 00:20:41.306 EOF 00:20:41.306 )") 00:20:41.306 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:41.306 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:41.306 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:41.306 15:25:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:41.306 "params": { 00:20:41.306 "name": "Nvme1", 00:20:41.306 "trtype": "tcp", 00:20:41.306 "traddr": "10.0.0.2", 00:20:41.306 "adrfam": "ipv4", 00:20:41.306 "trsvcid": "4420", 00:20:41.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.306 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.306 "hdgst": false, 00:20:41.306 "ddgst": false 00:20:41.306 }, 00:20:41.306 "method": "bdev_nvme_attach_controller" 00:20:41.306 }' 00:20:41.306 [2024-07-15 15:25:45.002856] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:20:41.306 [2024-07-15 15:25:45.002903] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3081593 ] 00:20:41.306 [2024-07-15 15:25:45.077937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.306 [2024-07-15 15:25:45.178907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.306 [2024-07-15 15:25:45.179003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.306 [2024-07-15 15:25:45.179003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.564 I/O targets: 00:20:41.564 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:41.564 00:20:41.564 00:20:41.564 CUnit - A unit testing framework for C - Version 2.1-3 00:20:41.564 http://cunit.sourceforge.net/ 00:20:41.564 00:20:41.564 00:20:41.564 Suite: bdevio tests on: Nvme1n1 00:20:41.564 Test: blockdev write read block ...passed 00:20:41.564 Test: blockdev write zeroes read block ...passed 00:20:41.564 Test: blockdev write zeroes read no split ...passed 00:20:41.822 Test: blockdev write zeroes read split ...passed 00:20:41.822 Test: blockdev write zeroes read split partial ...passed 00:20:41.822 Test: blockdev reset ...[2024-07-15 15:25:45.556144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.822 [2024-07-15 15:25:45.556208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c0670 (9): Bad file descriptor 00:20:41.822 [2024-07-15 15:25:45.650114] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:41.822 passed 00:20:41.822 Test: blockdev write read 8 blocks ...passed 00:20:41.822 Test: blockdev write read size > 128k ...passed 00:20:41.822 Test: blockdev write read invalid size ...passed 00:20:42.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:42.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:42.080 Test: blockdev write read max offset ...passed 00:20:42.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:42.080 Test: blockdev writev readv 8 blocks ...passed 00:20:42.080 Test: blockdev writev readv 30 x 1block ...passed 00:20:42.080 Test: blockdev writev readv block ...passed 00:20:42.080 Test: blockdev writev readv size > 128k ...passed 00:20:42.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:42.080 Test: blockdev comparev and writev ...[2024-07-15 15:25:45.909711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.909739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.080 [2024-07-15 15:25:45.909755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.909766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.080 [2024-07-15 15:25:45.910116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.910128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:42.080 [2024-07-15 15:25:45.910146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.910155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:42.080 [2024-07-15 15:25:45.910478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.910491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:42.080 [2024-07-15 15:25:45.910505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.910514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:42.080 [2024-07-15 15:25:45.910850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.910862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:42.080 [2024-07-15 15:25:45.910876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:42.080 [2024-07-15 15:25:45.910886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:42.080 passed 00:20:42.339 Test: blockdev nvme passthru rw ...passed 00:20:42.339 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:25:45.993395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.339 [2024-07-15 15:25:45.993411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:42.339 [2024-07-15 15:25:45.993621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.339 [2024-07-15 15:25:45.993632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:42.339 [2024-07-15 15:25:45.993834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.339 [2024-07-15 15:25:45.993847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:42.339 [2024-07-15 15:25:45.994059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:42.339 [2024-07-15 15:25:45.994071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:42.339 passed 00:20:42.339 Test: blockdev nvme admin passthru ...passed 00:20:42.339 Test: blockdev copy ...passed 00:20:42.339 00:20:42.339 Run Summary: Type Total Ran Passed Failed Inactive 00:20:42.339 suites 1 1 n/a 0 0 00:20:42.339 tests 23 23 23 0 0 00:20:42.339 asserts 152 152 152 0 n/a 00:20:42.339 00:20:42.339 Elapsed time = 1.435 seconds 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.598 rmmod nvme_tcp 00:20:42.598 rmmod nvme_fabrics 00:20:42.598 rmmod nvme_keyring 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3081556 ']' 00:20:42.598 15:25:46 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3081556 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3081556 ']' 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3081556 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3081556 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3081556' 00:20:42.598 killing process with pid 3081556 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3081556 00:20:42.598 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3081556 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.166 15:25:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.070 15:25:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:45.070 00:20:45.070 real 0m12.067s 00:20:45.070 user 0m14.218s 00:20:45.070 sys 0m6.517s 00:20:45.070 15:25:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:45.070 15:25:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:45.070 ************************************ 00:20:45.071 END TEST nvmf_bdevio_no_huge 00:20:45.071 ************************************ 00:20:45.071 15:25:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:45.071 15:25:48 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:45.071 15:25:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:45.071 15:25:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:45.071 15:25:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:45.330 ************************************ 00:20:45.330 START TEST nvmf_tls 00:20:45.330 ************************************ 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:45.330 * Looking for test storage... 
00:20:45.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.330 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.331 15:25:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.928 
15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:51.928 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:51.929 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:51.929 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:51.929 Found net devices under 0000:af:00.0: cvl_0_0 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:51.929 Found net devices under 0000:af:00.1: cvl_0_1 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 ))
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:51.929 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:52.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:52.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms
00:20:52.188
00:20:52.188 --- 10.0.0.2 ping statistics ---
00:20:52.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.188 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:52.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:52.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms
00:20:52.188
00:20:52.188 --- 10.0.0.1 ping statistics ---
00:20:52.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:52.188 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3085535
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3085535
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3085535 ']'
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:52.188 15:25:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:52.189 [2024-07-15 15:25:55.975518] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:20:52.189 [2024-07-15 15:25:55.975572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:52.189 EAL: No free 2048 kB hugepages reported on node 1
00:20:52.189 [2024-07-15 15:25:56.049254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:52.447 [2024-07-15 15:25:56.125843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:52.447 [2024-07-15 15:25:56.125882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
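The nvmf_tcp_init trace above boils down to a small amount of ip/iptables plumbing: the target-side e810 port is moved into a private network namespace so that initiator (10.0.0.1) and target (10.0.0.2) can talk NVMe/TCP over real ports on a single host. Condensed into a standalone sketch, with the interface and namespace names taken from this run's trace:

# condensed from the nvmf_tcp_init trace above
ip netns add cvl_0_0_ns_spdk                          # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                    # sanity checks, as in the log
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every nvmf_tgt invocation below is then wrapped in 'ip netns exec cvl_0_0_ns_spdk' (NVMF_TARGET_NS_CMD), which is why the target listens on 10.0.0.2 while bdevperf connects from the root namespace.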
00:20:52.447 [2024-07-15 15:25:56.125892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.447 [2024-07-15 15:25:56.125900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.447 [2024-07-15 15:25:56.125907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.447 [2024-07-15 15:25:56.125931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:53.015 15:25:56 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:53.273 true 00:20:53.273 15:25:56 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:53.273 15:25:56 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:53.273 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:53.273 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:53.273 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:53.532 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:53.533 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:53.792 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:53.792 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:53.792 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:53.792 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:53.792 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:54.051 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:54.051 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:54.051 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.051 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:54.310 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:54.310 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:54.310 15:25:57 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:54.310 15:25:58 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.310 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:54.569 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:54.569 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:54.569 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:54.829 15:25:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.yklIYzCxqz 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Z3rp7fRWXU 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.yklIYzCxqz 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Z3rp7fRWXU 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:55.088 15:25:58 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:55.348 15:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.yklIYzCxqz 00:20:55.348 15:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yklIYzCxqz 00:20:55.348 15:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.608 [2024-07-15 15:25:59.334389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.608 15:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:55.867 15:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.867 [2024-07-15 15:25:59.675241] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.867 [2024-07-15 15:25:59.675457] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.867 15:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:56.126 malloc0 00:20:56.126 15:25:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:56.386 15:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yklIYzCxqz 00:20:56.386 [2024-07-15 15:26:00.180826] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:56.386 15:26:00 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yklIYzCxqz 00:20:56.386 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.589 Initializing NVMe Controllers 00:21:08.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:08.589 Initialization complete. Launching workers. 
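The two interchange keys prepared above (/tmp/tmp.yklIYzCxqz holds the key the target registers for host1; /tmp/tmp.Z3rp7fRWXU holds a syntactically valid but unregistered one) come out of format_interchange_psk. A minimal standalone sketch of that helper, assuming nvmf/common.sh's implementation matches the inline python visible in the trace: the hex string is taken as ASCII bytes, a little-endian CRC32 is appended, and the result is base64-wrapped into the NVMe TLS PSK interchange format.

# sketch of format_interchange_psk as exercised above (assumption: matches common.sh)
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - <<EOF
import base64, zlib
key = b"$key"                                 # the hex string itself, as ASCII
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 appended as an integrity check
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}
# format_interchange_psk 00112233445566778899aabbccddeeff 1
#   -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting string is what gets written to /tmp with mode 0600 and handed to nvmf_subsystem_add_host --psk on the target and to --psk/--psk-path on the initiator side.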
00:21:08.589 ======================================================== 00:21:08.589 Latency(us) 00:21:08.589 Device Information : IOPS MiB/s Average min max 00:21:08.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16461.50 64.30 3888.29 768.48 6504.51 00:21:08.589 ======================================================== 00:21:08.589 Total : 16461.50 64.30 3888.29 768.48 6504.51 00:21:08.589 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yklIYzCxqz 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yklIYzCxqz' 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3087995 00:21:08.589 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3087995 /var/tmp/bdevperf.sock 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3087995 ']' 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:08.590 15:26:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.590 [2024-07-15 15:26:10.339726] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:08.590 [2024-07-15 15:26:10.339779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087995 ] 00:21:08.590 EAL: No free 2048 kB hugepages reported on node 1 00:21:08.590 [2024-07-15 15:26:10.406728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.590 [2024-07-15 15:26:10.479855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.590 15:26:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.590 15:26:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.590 15:26:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yklIYzCxqz 00:21:08.590 [2024-07-15 15:26:11.273533] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.590 [2024-07-15 15:26:11.273612] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:08.590 TLSTESTn1 00:21:08.590 15:26:11 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.590 Running I/O for 10 seconds... 00:21:18.587 00:21:18.587 Latency(us) 00:21:18.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.587 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.587 Verification LBA range: start 0x0 length 0x2000 00:21:18.587 TLSTESTn1 : 10.03 4394.10 17.16 0.00 0.00 29075.48 6107.96 82627.79 00:21:18.587 =================================================================================================================== 00:21:18.587 Total : 4394.10 17.16 0.00 0.00 29075.48 6107.96 82627.79 00:21:18.587 0 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3087995 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3087995 ']' 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3087995 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3087995 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3087995' 00:21:18.587 killing process with pid 3087995 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3087995 00:21:18.587 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.587 00:21:18.587 Latency(us) 00:21:18.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:18.587 =================================================================================================================== 00:21:18.587 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.587 [2024-07-15 15:26:21.572426] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3087995 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z3rp7fRWXU 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z3rp7fRWXU 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z3rp7fRWXU 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Z3rp7fRWXU' 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3089977 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3089977 /var/tmp/bdevperf.sock 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3089977 ']' 00:21:18.587 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.588 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.588 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.588 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.588 15:26:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.588 [2024-07-15 15:26:21.805032] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:18.588 [2024-07-15 15:26:21.805085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089977 ] 00:21:18.588 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.588 [2024-07-15 15:26:21.871009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.588 [2024-07-15 15:26:21.937201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.847 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.847 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.847 15:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z3rp7fRWXU 00:21:18.847 [2024-07-15 15:26:22.747647] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.847 [2024-07-15 15:26:22.747731] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:19.105 [2024-07-15 15:26:22.755629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:19.105 [2024-07-15 15:26:22.755972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b25e0 (107): Transport endpoint is not connected 00:21:19.105 [2024-07-15 15:26:22.756965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b25e0 (9): Bad file descriptor 00:21:19.105 [2024-07-15 15:26:22.757967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:19.105 [2024-07-15 15:26:22.757979] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:19.105 [2024-07-15 15:26:22.757991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:19.105 request: 00:21:19.105 { 00:21:19.105 "name": "TLSTEST", 00:21:19.105 "trtype": "tcp", 00:21:19.105 "traddr": "10.0.0.2", 00:21:19.105 "adrfam": "ipv4", 00:21:19.105 "trsvcid": "4420", 00:21:19.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.106 "prchk_reftag": false, 00:21:19.106 "prchk_guard": false, 00:21:19.106 "hdgst": false, 00:21:19.106 "ddgst": false, 00:21:19.106 "psk": "/tmp/tmp.Z3rp7fRWXU", 00:21:19.106 "method": "bdev_nvme_attach_controller", 00:21:19.106 "req_id": 1 00:21:19.106 } 00:21:19.106 Got JSON-RPC error response 00:21:19.106 response: 00:21:19.106 { 00:21:19.106 "code": -5, 00:21:19.106 "message": "Input/output error" 00:21:19.106 } 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3089977 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3089977 ']' 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3089977 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3089977 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3089977' 00:21:19.106 killing process with pid 3089977 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3089977 00:21:19.106 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.106 00:21:19.106 Latency(us) 00:21:19.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.106 =================================================================================================================== 00:21:19.106 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.106 [2024-07-15 15:26:22.827618] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3089977 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yklIYzCxqz 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yklIYzCxqz 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yklIYzCxqz 00:21:19.106 15:26:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yklIYzCxqz' 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3090094 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3090094 /var/tmp/bdevperf.sock 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3090094 ']' 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.106 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.364 [2024-07-15 15:26:23.048704] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:19.364 [2024-07-15 15:26:23.048757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090094 ] 00:21:19.364 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.364 [2024-07-15 15:26:23.116140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.364 [2024-07-15 15:26:23.183423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.296 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.296 15:26:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.296 15:26:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.yklIYzCxqz 00:21:20.296 [2024-07-15 15:26:23.993730] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.296 [2024-07-15 15:26:23.993812] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.296 [2024-07-15 15:26:23.998713] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:20.296 [2024-07-15 15:26:23.998738] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:20.296 [2024-07-15 15:26:23.998765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.296 [2024-07-15 15:26:23.999025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccc5e0 (107): Transport endpoint is not connected 00:21:20.296 [2024-07-15 15:26:24.000017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccc5e0 (9): Bad file descriptor 00:21:20.296 [2024-07-15 15:26:24.001018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:20.296 [2024-07-15 15:26:24.001030] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:20.296 [2024-07-15 15:26:24.001041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:20.296 request: 00:21:20.296 { 00:21:20.296 "name": "TLSTEST", 00:21:20.296 "trtype": "tcp", 00:21:20.296 "traddr": "10.0.0.2", 00:21:20.296 "adrfam": "ipv4", 00:21:20.296 "trsvcid": "4420", 00:21:20.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.296 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:20.296 "prchk_reftag": false, 00:21:20.296 "prchk_guard": false, 00:21:20.296 "hdgst": false, 00:21:20.296 "ddgst": false, 00:21:20.296 "psk": "/tmp/tmp.yklIYzCxqz", 00:21:20.296 "method": "bdev_nvme_attach_controller", 00:21:20.296 "req_id": 1 00:21:20.296 } 00:21:20.296 Got JSON-RPC error response 00:21:20.296 response: 00:21:20.296 { 00:21:20.296 "code": -5, 00:21:20.296 "message": "Input/output error" 00:21:20.296 } 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3090094 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3090094 ']' 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3090094 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090094 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090094' 00:21:20.296 killing process with pid 3090094 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3090094 00:21:20.296 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.296 00:21:20.296 Latency(us) 00:21:20.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.296 =================================================================================================================== 00:21:20.296 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.296 [2024-07-15 15:26:24.073231] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.296 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3090094 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yklIYzCxqz 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yklIYzCxqz 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yklIYzCxqz 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yklIYzCxqz' 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3090348 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3090348 /var/tmp/bdevperf.sock 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3090348 ']' 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.554 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.554 [2024-07-15 15:26:24.294288] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:20.554 [2024-07-15 15:26:24.294340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090348 ] 00:21:20.554 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.554 [2024-07-15 15:26:24.360028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.554 [2024-07-15 15:26:24.423276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yklIYzCxqz 00:21:21.489 [2024-07-15 15:26:25.253002] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.489 [2024-07-15 15:26:25.253082] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:21.489 [2024-07-15 15:26:25.261926] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:21.489 [2024-07-15 15:26:25.261953] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:21.489 [2024-07-15 15:26:25.261980] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:21.489 [2024-07-15 15:26:25.262300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f845e0 (107): Transport endpoint is not connected 00:21:21.489 [2024-07-15 15:26:25.263293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f845e0 (9): Bad file descriptor 00:21:21.489 [2024-07-15 15:26:25.264294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:21.489 [2024-07-15 15:26:25.264305] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:21.489 [2024-07-15 15:26:25.264316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:21.489 request: 00:21:21.489 { 00:21:21.489 "name": "TLSTEST", 00:21:21.489 "trtype": "tcp", 00:21:21.489 "traddr": "10.0.0.2", 00:21:21.489 "adrfam": "ipv4", 00:21:21.489 "trsvcid": "4420", 00:21:21.489 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.489 "prchk_reftag": false, 00:21:21.489 "prchk_guard": false, 00:21:21.489 "hdgst": false, 00:21:21.489 "ddgst": false, 00:21:21.489 "psk": "/tmp/tmp.yklIYzCxqz", 00:21:21.489 "method": "bdev_nvme_attach_controller", 00:21:21.489 "req_id": 1 00:21:21.489 } 00:21:21.489 Got JSON-RPC error response 00:21:21.489 response: 00:21:21.489 { 00:21:21.489 "code": -5, 00:21:21.489 "message": "Input/output error" 00:21:21.489 } 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3090348 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3090348 ']' 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3090348 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090348 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090348' 00:21:21.489 killing process with pid 3090348 00:21:21.489 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3090348 00:21:21.489 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.489 00:21:21.489 Latency(us) 00:21:21.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.490 =================================================================================================================== 00:21:21.490 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.490 [2024-07-15 15:26:25.328938] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:21.490 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3090348 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3090614 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3090614 /var/tmp/bdevperf.sock 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3090614 ']' 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.748 15:26:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.748 [2024-07-15 15:26:25.549126] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:21.749 [2024-07-15 15:26:25.549175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090614 ] 00:21:21.749 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.749 [2024-07-15 15:26:25.613690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.007 [2024-07-15 15:26:25.677936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.574 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.574 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:22.574 15:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:22.833 [2024-07-15 15:26:26.505091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:22.833 [2024-07-15 15:26:26.507043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a47b50 (9): Bad file descriptor 00:21:22.833 [2024-07-15 15:26:26.508041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.833 [2024-07-15 15:26:26.508054] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:22.833 [2024-07-15 15:26:26.508065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
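This case drops the --psk argument entirely while the target listener still requires TLS, so the attach fails the same way: the target closes the connection (spdk_sock_recv() errno 107) and the controller ends up in a failed state. Restated standalone from the trace above, the failing call is simply a plain-TCP attach against the TLS listener (rpc.py path shortened; bdevperf was started with -z -r /var/tmp/bdevperf.sock):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1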
00:21:22.833 request: 00:21:22.833 { 00:21:22.833 "name": "TLSTEST", 00:21:22.833 "trtype": "tcp", 00:21:22.833 "traddr": "10.0.0.2", 00:21:22.833 "adrfam": "ipv4", 00:21:22.833 "trsvcid": "4420", 00:21:22.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.833 "prchk_reftag": false, 00:21:22.833 "prchk_guard": false, 00:21:22.833 "hdgst": false, 00:21:22.833 "ddgst": false, 00:21:22.833 "method": "bdev_nvme_attach_controller", 00:21:22.833 "req_id": 1 00:21:22.833 } 00:21:22.833 Got JSON-RPC error response 00:21:22.833 response: 00:21:22.833 { 00:21:22.833 "code": -5, 00:21:22.833 "message": "Input/output error" 00:21:22.833 } 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3090614 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3090614 ']' 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3090614 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090614 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090614' 00:21:22.833 killing process with pid 3090614 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3090614 00:21:22.833 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.833 00:21:22.833 Latency(us) 00:21:22.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.833 =================================================================================================================== 00:21:22.833 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:22.833 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3090614 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3085535 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3085535 ']' 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3085535 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3085535 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3085535' 00:21:23.092 
killing process with pid 3085535 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3085535 00:21:23.092 [2024-07-15 15:26:26.794320] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3085535 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:23.092 15:26:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ehJcbKIws5 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ehJcbKIws5 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.351 15:26:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3090897 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3090897 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3090897 ']' 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.352 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.352 [2024-07-15 15:26:27.093788] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:23.352 [2024-07-15 15:26:27.093845] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.352 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.352 [2024-07-15 15:26:27.167875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.352 [2024-07-15 15:26:27.239453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.352 [2024-07-15 15:26:27.239490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.352 [2024-07-15 15:26:27.239499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.352 [2024-07-15 15:26:27.239507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.352 [2024-07-15 15:26:27.239531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.352 [2024-07-15 15:26:27.239550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ehJcbKIws5 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ehJcbKIws5 00:21:24.318 15:26:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:24.318 [2024-07-15 15:26:28.086396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.318 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:24.601 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:24.601 [2024-07-15 15:26:28.411227] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.601 [2024-07-15 15:26:28.411421] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.601 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:24.861 malloc0 00:21:24.861 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:24.861 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.ehJcbKIws5 00:21:25.120 [2024-07-15 15:26:28.884700] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ehJcbKIws5 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ehJcbKIws5' 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3091194 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3091194 /var/tmp/bdevperf.sock 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3091194 ']' 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.120 15:26:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.120 [2024-07-15 15:26:28.939607] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:25.120 [2024-07-15 15:26:28.939659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091194 ] 00:21:25.120 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.120 [2024-07-15 15:26:29.004909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.379 [2024-07-15 15:26:29.079303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.946 15:26:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.946 15:26:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.946 15:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5 00:21:26.205 [2024-07-15 15:26:29.905972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:26.205 [2024-07-15 15:26:29.906044] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:26.205 TLSTESTn1 00:21:26.205 15:26:29 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:26.205 Running I/O for 10 seconds... 00:21:38.413 00:21:38.413 Latency(us) 00:21:38.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.413 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:38.413 Verification LBA range: start 0x0 length 0x2000 00:21:38.413 TLSTESTn1 : 10.03 4425.04 17.29 0.00 0.00 28873.93 6474.96 69625.45 00:21:38.413 =================================================================================================================== 00:21:38.413 Total : 4425.04 17.29 0.00 0.00 28873.93 6474.96 69625.45 00:21:38.413 0 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3091194 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3091194 ']' 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3091194 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3091194 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3091194' 00:21:38.413 killing process with pid 3091194 00:21:38.413 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3091194 00:21:38.413 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.413 00:21:38.413 Latency(us) 00:21:38.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:38.413 =================================================================================================================== 00:21:38.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.414 [2024-07-15 15:26:40.212604] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3091194 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ehJcbKIws5 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ehJcbKIws5 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ehJcbKIws5 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ehJcbKIws5 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ehJcbKIws5' 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3093095 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3093095 /var/tmp/bdevperf.sock 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3093095 ']' 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.414 15:26:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.414 [2024-07-15 15:26:40.446019] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:38.414 [2024-07-15 15:26:40.446075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093095 ] 00:21:38.414 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.414 [2024-07-15 15:26:40.511633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.414 [2024-07-15 15:26:40.586264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5 00:21:38.414 [2024-07-15 15:26:41.404352] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.414 [2024-07-15 15:26:41.404397] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:38.414 [2024-07-15 15:26:41.404405] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ehJcbKIws5 00:21:38.414 request: 00:21:38.414 { 00:21:38.414 "name": "TLSTEST", 00:21:38.414 "trtype": "tcp", 00:21:38.414 "traddr": "10.0.0.2", 00:21:38.414 "adrfam": "ipv4", 00:21:38.414 "trsvcid": "4420", 00:21:38.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.414 "prchk_reftag": false, 00:21:38.414 "prchk_guard": false, 00:21:38.414 "hdgst": false, 00:21:38.414 "ddgst": false, 00:21:38.414 "psk": "/tmp/tmp.ehJcbKIws5", 00:21:38.414 "method": "bdev_nvme_attach_controller", 00:21:38.414 "req_id": 1 00:21:38.414 } 00:21:38.414 Got JSON-RPC error response 00:21:38.414 response: 00:21:38.414 { 00:21:38.414 "code": -1, 00:21:38.414 "message": "Operation not permitted" 00:21:38.414 } 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3093095 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3093095 ']' 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3093095 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3093095 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3093095' 00:21:38.414 killing process with pid 3093095 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3093095 00:21:38.414 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.414 00:21:38.414 Latency(us) 00:21:38.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.414 
=================================================================================================================== 00:21:38.414 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3093095 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3090897 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3090897 ']' 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3090897 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3090897 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3090897' 00:21:38.414 killing process with pid 3090897 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3090897 00:21:38.414 [2024-07-15 15:26:41.715401] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3090897 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3093344 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3093344 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3093344 ']' 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
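The 10-second TLSTESTn1 run above (driven via examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests) sustained roughly 4.4k IOPS through the TLS-encrypted queue pair keyed by /tmp/tmp.ehJcbKIws5. That file holds the interchange-format PSK generated earlier by format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2. Below is a minimal sketch of that helper's embedded python step, assuming from the logged NVMeTLSkey-1:02:... output that the format is prefix, two-digit hash selector (02 for the 48-byte/SHA-384 case), then base64 of the configured key bytes with a CRC32 appended:

  python3 - << 'EOF'
  import base64, zlib
  key = b"00112233445566778899aabbccddeeff0011223344556677"  # configured PSK bytes
  crc = zlib.crc32(key).to_bytes(4, "little")                 # byte order assumed
  print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
  EOF

The suite deliberately chmods this file to 0666 around these steps: both the initiator-side key load (the "Incorrect permissions for PSK file" failure above) and the target-side nvmf_subsystem_add_host (the "Could not retrieve PSK from file" failure below) refuse a PSK file accessible to group/other, and 0600 is restored before the final successful run.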
00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.414 15:26:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.414 [2024-07-15 15:26:41.961663] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:38.414 [2024-07-15 15:26:41.961714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.414 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.414 [2024-07-15 15:26:42.036879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.414 [2024-07-15 15:26:42.107753] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.414 [2024-07-15 15:26:42.107793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.414 [2024-07-15 15:26:42.107802] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.414 [2024-07-15 15:26:42.107812] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.414 [2024-07-15 15:26:42.107819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.414 [2024-07-15 15:26:42.107845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ehJcbKIws5 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ehJcbKIws5 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ehJcbKIws5 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ehJcbKIws5 00:21:38.982 15:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.241 [2024-07-15 15:26:42.958217] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.241 15:26:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.241 
15:26:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:39.499 [2024-07-15 15:26:43.283033] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.499 [2024-07-15 15:26:43.283233] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.499 15:26:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:39.758 malloc0 00:21:39.758 15:26:43 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:39.758 15:26:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5 00:21:40.016 [2024-07-15 15:26:43.796821] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:40.016 [2024-07-15 15:26:43.796855] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:40.016 [2024-07-15 15:26:43.796881] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:40.016 request: 00:21:40.016 { 00:21:40.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.016 "host": "nqn.2016-06.io.spdk:host1", 00:21:40.016 "psk": "/tmp/tmp.ehJcbKIws5", 00:21:40.016 "method": "nvmf_subsystem_add_host", 00:21:40.016 "req_id": 1 00:21:40.016 } 00:21:40.016 Got JSON-RPC error response 00:21:40.016 response: 00:21:40.016 { 00:21:40.016 "code": -32603, 00:21:40.016 "message": "Internal error" 00:21:40.016 } 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3093344 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3093344 ']' 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3093344 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3093344 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.016 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3093344' 00:21:40.017 killing process with pid 3093344 00:21:40.017 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3093344 00:21:40.017 15:26:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3093344 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ehJcbKIws5 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:40.275 
15:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3093866 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3093866 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3093866 ']' 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.275 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.275 [2024-07-15 15:26:44.124967] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:40.275 [2024-07-15 15:26:44.125017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.275 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.535 [2024-07-15 15:26:44.198781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.535 [2024-07-15 15:26:44.268866] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.535 [2024-07-15 15:26:44.268909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.535 [2024-07-15 15:26:44.268919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.535 [2024-07-15 15:26:44.268928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.535 [2024-07-15 15:26:44.268951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
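With the key back at mode 0600, setup_nvmf_tgt will now succeed against the fresh target (pid 3093866). Condensed from the traces, the full target-side TLS bring-up is these six RPCs (rpc.py path shortened); -k on the listener is what marks it TLS-enabled, and the final add_host --psk step is the one that re-reads the key file:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5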
00:21:40.535 [2024-07-15 15:26:44.268984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ehJcbKIws5 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ehJcbKIws5 00:21:41.102 15:26:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:41.361 [2024-07-15 15:26:45.110260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.361 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:41.620 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:41.620 [2024-07-15 15:26:45.463172] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.620 [2024-07-15 15:26:45.463362] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.620 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:41.879 malloc0 00:21:41.879 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5 00:21:42.138 [2024-07-15 15:26:45.976853] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3094159 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3094159 /var/tmp/bdevperf.sock 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3094159 ']' 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.138 15:26:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:42.138 [2024-07-15 15:26:46.027255] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:42.138 [2024-07-15 15:26:46.027305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094159 ] 00:21:42.396 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.396 [2024-07-15 15:26:46.092864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.396 [2024-07-15 15:26:46.162351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.964 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.964 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:42.964 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5 00:21:43.223 [2024-07-15 15:26:46.996002] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.223 [2024-07-15 15:26:46.996081] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:43.223 TLSTESTn1 00:21:43.223 15:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:43.483 15:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:43.483 "subsystems": [ 00:21:43.483 { 00:21:43.483 "subsystem": "keyring", 00:21:43.483 "config": [] 00:21:43.483 }, 00:21:43.483 { 00:21:43.483 "subsystem": "iobuf", 00:21:43.483 "config": [ 00:21:43.483 { 00:21:43.483 "method": "iobuf_set_options", 00:21:43.483 "params": { 00:21:43.483 "small_pool_count": 8192, 00:21:43.483 "large_pool_count": 1024, 00:21:43.483 "small_bufsize": 8192, 00:21:43.483 "large_bufsize": 135168 00:21:43.483 } 00:21:43.483 } 00:21:43.483 ] 00:21:43.483 }, 00:21:43.483 { 00:21:43.483 "subsystem": "sock", 00:21:43.483 "config": [ 00:21:43.483 { 00:21:43.483 "method": "sock_set_default_impl", 00:21:43.483 "params": { 00:21:43.483 "impl_name": "posix" 00:21:43.483 } 00:21:43.483 }, 00:21:43.483 { 00:21:43.483 "method": "sock_impl_set_options", 00:21:43.483 "params": { 00:21:43.483 "impl_name": "ssl", 00:21:43.483 "recv_buf_size": 4096, 00:21:43.483 "send_buf_size": 4096, 00:21:43.483 "enable_recv_pipe": true, 00:21:43.483 "enable_quickack": false, 00:21:43.483 "enable_placement_id": 0, 00:21:43.483 "enable_zerocopy_send_server": true, 00:21:43.483 "enable_zerocopy_send_client": false, 00:21:43.483 "zerocopy_threshold": 0, 00:21:43.483 "tls_version": 0, 00:21:43.483 "enable_ktls": false 00:21:43.483 } 00:21:43.483 }, 00:21:43.483 { 00:21:43.483 "method": "sock_impl_set_options", 00:21:43.483 "params": { 00:21:43.483 "impl_name": "posix", 00:21:43.483 "recv_buf_size": 2097152, 00:21:43.483 
"send_buf_size": 2097152, 00:21:43.483 "enable_recv_pipe": true, 00:21:43.483 "enable_quickack": false, 00:21:43.483 "enable_placement_id": 0, 00:21:43.483 "enable_zerocopy_send_server": true, 00:21:43.483 "enable_zerocopy_send_client": false, 00:21:43.483 "zerocopy_threshold": 0, 00:21:43.483 "tls_version": 0, 00:21:43.483 "enable_ktls": false 00:21:43.483 } 00:21:43.483 } 00:21:43.483 ] 00:21:43.483 }, 00:21:43.483 { 00:21:43.483 "subsystem": "vmd", 00:21:43.483 "config": [] 00:21:43.483 }, 00:21:43.483 { 00:21:43.483 "subsystem": "accel", 00:21:43.483 "config": [ 00:21:43.484 { 00:21:43.484 "method": "accel_set_options", 00:21:43.484 "params": { 00:21:43.484 "small_cache_size": 128, 00:21:43.484 "large_cache_size": 16, 00:21:43.484 "task_count": 2048, 00:21:43.484 "sequence_count": 2048, 00:21:43.484 "buf_count": 2048 00:21:43.484 } 00:21:43.484 } 00:21:43.484 ] 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "subsystem": "bdev", 00:21:43.484 "config": [ 00:21:43.484 { 00:21:43.484 "method": "bdev_set_options", 00:21:43.484 "params": { 00:21:43.484 "bdev_io_pool_size": 65535, 00:21:43.484 "bdev_io_cache_size": 256, 00:21:43.484 "bdev_auto_examine": true, 00:21:43.484 "iobuf_small_cache_size": 128, 00:21:43.484 "iobuf_large_cache_size": 16 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "bdev_raid_set_options", 00:21:43.484 "params": { 00:21:43.484 "process_window_size_kb": 1024 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "bdev_iscsi_set_options", 00:21:43.484 "params": { 00:21:43.484 "timeout_sec": 30 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "bdev_nvme_set_options", 00:21:43.484 "params": { 00:21:43.484 "action_on_timeout": "none", 00:21:43.484 "timeout_us": 0, 00:21:43.484 "timeout_admin_us": 0, 00:21:43.484 "keep_alive_timeout_ms": 10000, 00:21:43.484 "arbitration_burst": 0, 00:21:43.484 "low_priority_weight": 0, 00:21:43.484 "medium_priority_weight": 0, 00:21:43.484 "high_priority_weight": 0, 00:21:43.484 "nvme_adminq_poll_period_us": 10000, 00:21:43.484 "nvme_ioq_poll_period_us": 0, 00:21:43.484 "io_queue_requests": 0, 00:21:43.484 "delay_cmd_submit": true, 00:21:43.484 "transport_retry_count": 4, 00:21:43.484 "bdev_retry_count": 3, 00:21:43.484 "transport_ack_timeout": 0, 00:21:43.484 "ctrlr_loss_timeout_sec": 0, 00:21:43.484 "reconnect_delay_sec": 0, 00:21:43.484 "fast_io_fail_timeout_sec": 0, 00:21:43.484 "disable_auto_failback": false, 00:21:43.484 "generate_uuids": false, 00:21:43.484 "transport_tos": 0, 00:21:43.484 "nvme_error_stat": false, 00:21:43.484 "rdma_srq_size": 0, 00:21:43.484 "io_path_stat": false, 00:21:43.484 "allow_accel_sequence": false, 00:21:43.484 "rdma_max_cq_size": 0, 00:21:43.484 "rdma_cm_event_timeout_ms": 0, 00:21:43.484 "dhchap_digests": [ 00:21:43.484 "sha256", 00:21:43.484 "sha384", 00:21:43.484 "sha512" 00:21:43.484 ], 00:21:43.484 "dhchap_dhgroups": [ 00:21:43.484 "null", 00:21:43.484 "ffdhe2048", 00:21:43.484 "ffdhe3072", 00:21:43.484 "ffdhe4096", 00:21:43.484 "ffdhe6144", 00:21:43.484 "ffdhe8192" 00:21:43.484 ] 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "bdev_nvme_set_hotplug", 00:21:43.484 "params": { 00:21:43.484 "period_us": 100000, 00:21:43.484 "enable": false 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "bdev_malloc_create", 00:21:43.484 "params": { 00:21:43.484 "name": "malloc0", 00:21:43.484 "num_blocks": 8192, 00:21:43.484 "block_size": 4096, 00:21:43.484 "physical_block_size": 4096, 00:21:43.484 "uuid": 
"435de785-cba5-4a6e-bd64-9642a8c0f809", 00:21:43.484 "optimal_io_boundary": 0 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "bdev_wait_for_examine" 00:21:43.484 } 00:21:43.484 ] 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "subsystem": "nbd", 00:21:43.484 "config": [] 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "subsystem": "scheduler", 00:21:43.484 "config": [ 00:21:43.484 { 00:21:43.484 "method": "framework_set_scheduler", 00:21:43.484 "params": { 00:21:43.484 "name": "static" 00:21:43.484 } 00:21:43.484 } 00:21:43.484 ] 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "subsystem": "nvmf", 00:21:43.484 "config": [ 00:21:43.484 { 00:21:43.484 "method": "nvmf_set_config", 00:21:43.484 "params": { 00:21:43.484 "discovery_filter": "match_any", 00:21:43.484 "admin_cmd_passthru": { 00:21:43.484 "identify_ctrlr": false 00:21:43.484 } 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "nvmf_set_max_subsystems", 00:21:43.484 "params": { 00:21:43.484 "max_subsystems": 1024 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "nvmf_set_crdt", 00:21:43.484 "params": { 00:21:43.484 "crdt1": 0, 00:21:43.484 "crdt2": 0, 00:21:43.484 "crdt3": 0 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "nvmf_create_transport", 00:21:43.484 "params": { 00:21:43.484 "trtype": "TCP", 00:21:43.484 "max_queue_depth": 128, 00:21:43.484 "max_io_qpairs_per_ctrlr": 127, 00:21:43.484 "in_capsule_data_size": 4096, 00:21:43.484 "max_io_size": 131072, 00:21:43.484 "io_unit_size": 131072, 00:21:43.484 "max_aq_depth": 128, 00:21:43.484 "num_shared_buffers": 511, 00:21:43.484 "buf_cache_size": 4294967295, 00:21:43.484 "dif_insert_or_strip": false, 00:21:43.484 "zcopy": false, 00:21:43.484 "c2h_success": false, 00:21:43.484 "sock_priority": 0, 00:21:43.484 "abort_timeout_sec": 1, 00:21:43.484 "ack_timeout": 0, 00:21:43.484 "data_wr_pool_size": 0 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "nvmf_create_subsystem", 00:21:43.484 "params": { 00:21:43.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.484 "allow_any_host": false, 00:21:43.484 "serial_number": "SPDK00000000000001", 00:21:43.484 "model_number": "SPDK bdev Controller", 00:21:43.484 "max_namespaces": 10, 00:21:43.484 "min_cntlid": 1, 00:21:43.484 "max_cntlid": 65519, 00:21:43.484 "ana_reporting": false 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "nvmf_subsystem_add_host", 00:21:43.484 "params": { 00:21:43.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.484 "host": "nqn.2016-06.io.spdk:host1", 00:21:43.484 "psk": "/tmp/tmp.ehJcbKIws5" 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "nvmf_subsystem_add_ns", 00:21:43.484 "params": { 00:21:43.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.484 "namespace": { 00:21:43.484 "nsid": 1, 00:21:43.484 "bdev_name": "malloc0", 00:21:43.484 "nguid": "435DE785CBA54A6EBD649642A8C0F809", 00:21:43.484 "uuid": "435de785-cba5-4a6e-bd64-9642a8c0f809", 00:21:43.484 "no_auto_visible": false 00:21:43.484 } 00:21:43.484 } 00:21:43.484 }, 00:21:43.484 { 00:21:43.484 "method": "nvmf_subsystem_add_listener", 00:21:43.484 "params": { 00:21:43.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.484 "listen_address": { 00:21:43.484 "trtype": "TCP", 00:21:43.484 "adrfam": "IPv4", 00:21:43.484 "traddr": "10.0.0.2", 00:21:43.484 "trsvcid": "4420" 00:21:43.484 }, 00:21:43.484 "secure_channel": true 00:21:43.484 } 00:21:43.484 } 00:21:43.484 ] 00:21:43.484 } 00:21:43.484 ] 00:21:43.484 }' 00:21:43.484 15:26:47 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:43.744 15:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:43.744 "subsystems": [ 00:21:43.744 { 00:21:43.744 "subsystem": "keyring", 00:21:43.744 "config": [] 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "subsystem": "iobuf", 00:21:43.744 "config": [ 00:21:43.744 { 00:21:43.744 "method": "iobuf_set_options", 00:21:43.744 "params": { 00:21:43.744 "small_pool_count": 8192, 00:21:43.744 "large_pool_count": 1024, 00:21:43.744 "small_bufsize": 8192, 00:21:43.744 "large_bufsize": 135168 00:21:43.744 } 00:21:43.744 } 00:21:43.744 ] 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "subsystem": "sock", 00:21:43.744 "config": [ 00:21:43.744 { 00:21:43.744 "method": "sock_set_default_impl", 00:21:43.744 "params": { 00:21:43.744 "impl_name": "posix" 00:21:43.744 } 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "method": "sock_impl_set_options", 00:21:43.744 "params": { 00:21:43.744 "impl_name": "ssl", 00:21:43.744 "recv_buf_size": 4096, 00:21:43.744 "send_buf_size": 4096, 00:21:43.744 "enable_recv_pipe": true, 00:21:43.744 "enable_quickack": false, 00:21:43.744 "enable_placement_id": 0, 00:21:43.744 "enable_zerocopy_send_server": true, 00:21:43.744 "enable_zerocopy_send_client": false, 00:21:43.744 "zerocopy_threshold": 0, 00:21:43.744 "tls_version": 0, 00:21:43.744 "enable_ktls": false 00:21:43.744 } 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "method": "sock_impl_set_options", 00:21:43.744 "params": { 00:21:43.744 "impl_name": "posix", 00:21:43.744 "recv_buf_size": 2097152, 00:21:43.744 "send_buf_size": 2097152, 00:21:43.744 "enable_recv_pipe": true, 00:21:43.744 "enable_quickack": false, 00:21:43.744 "enable_placement_id": 0, 00:21:43.744 "enable_zerocopy_send_server": true, 00:21:43.744 "enable_zerocopy_send_client": false, 00:21:43.744 "zerocopy_threshold": 0, 00:21:43.744 "tls_version": 0, 00:21:43.744 "enable_ktls": false 00:21:43.744 } 00:21:43.744 } 00:21:43.744 ] 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "subsystem": "vmd", 00:21:43.744 "config": [] 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "subsystem": "accel", 00:21:43.744 "config": [ 00:21:43.744 { 00:21:43.744 "method": "accel_set_options", 00:21:43.744 "params": { 00:21:43.744 "small_cache_size": 128, 00:21:43.744 "large_cache_size": 16, 00:21:43.744 "task_count": 2048, 00:21:43.744 "sequence_count": 2048, 00:21:43.744 "buf_count": 2048 00:21:43.744 } 00:21:43.744 } 00:21:43.744 ] 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "subsystem": "bdev", 00:21:43.744 "config": [ 00:21:43.744 { 00:21:43.744 "method": "bdev_set_options", 00:21:43.744 "params": { 00:21:43.744 "bdev_io_pool_size": 65535, 00:21:43.744 "bdev_io_cache_size": 256, 00:21:43.744 "bdev_auto_examine": true, 00:21:43.744 "iobuf_small_cache_size": 128, 00:21:43.744 "iobuf_large_cache_size": 16 00:21:43.744 } 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "method": "bdev_raid_set_options", 00:21:43.744 "params": { 00:21:43.744 "process_window_size_kb": 1024 00:21:43.744 } 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "method": "bdev_iscsi_set_options", 00:21:43.744 "params": { 00:21:43.744 "timeout_sec": 30 00:21:43.744 } 00:21:43.744 }, 00:21:43.744 { 00:21:43.744 "method": "bdev_nvme_set_options", 00:21:43.744 "params": { 00:21:43.744 "action_on_timeout": "none", 00:21:43.744 "timeout_us": 0, 00:21:43.744 "timeout_admin_us": 0, 00:21:43.744 "keep_alive_timeout_ms": 10000, 00:21:43.744 "arbitration_burst": 0, 
00:21:43.744 "low_priority_weight": 0, 00:21:43.744 "medium_priority_weight": 0, 00:21:43.744 "high_priority_weight": 0, 00:21:43.744 "nvme_adminq_poll_period_us": 10000, 00:21:43.744 "nvme_ioq_poll_period_us": 0, 00:21:43.744 "io_queue_requests": 512, 00:21:43.744 "delay_cmd_submit": true, 00:21:43.744 "transport_retry_count": 4, 00:21:43.744 "bdev_retry_count": 3, 00:21:43.744 "transport_ack_timeout": 0, 00:21:43.744 "ctrlr_loss_timeout_sec": 0, 00:21:43.744 "reconnect_delay_sec": 0, 00:21:43.744 "fast_io_fail_timeout_sec": 0, 00:21:43.744 "disable_auto_failback": false, 00:21:43.744 "generate_uuids": false, 00:21:43.744 "transport_tos": 0, 00:21:43.744 "nvme_error_stat": false, 00:21:43.744 "rdma_srq_size": 0, 00:21:43.744 "io_path_stat": false, 00:21:43.744 "allow_accel_sequence": false, 00:21:43.744 "rdma_max_cq_size": 0, 00:21:43.744 "rdma_cm_event_timeout_ms": 0, 00:21:43.744 "dhchap_digests": [ 00:21:43.744 "sha256", 00:21:43.745 "sha384", 00:21:43.745 "sha512" 00:21:43.745 ], 00:21:43.745 "dhchap_dhgroups": [ 00:21:43.745 "null", 00:21:43.745 "ffdhe2048", 00:21:43.745 "ffdhe3072", 00:21:43.745 "ffdhe4096", 00:21:43.745 "ffdhe6144", 00:21:43.745 "ffdhe8192" 00:21:43.745 ] 00:21:43.745 } 00:21:43.745 }, 00:21:43.745 { 00:21:43.745 "method": "bdev_nvme_attach_controller", 00:21:43.745 "params": { 00:21:43.745 "name": "TLSTEST", 00:21:43.745 "trtype": "TCP", 00:21:43.745 "adrfam": "IPv4", 00:21:43.745 "traddr": "10.0.0.2", 00:21:43.745 "trsvcid": "4420", 00:21:43.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.745 "prchk_reftag": false, 00:21:43.745 "prchk_guard": false, 00:21:43.745 "ctrlr_loss_timeout_sec": 0, 00:21:43.745 "reconnect_delay_sec": 0, 00:21:43.745 "fast_io_fail_timeout_sec": 0, 00:21:43.745 "psk": "/tmp/tmp.ehJcbKIws5", 00:21:43.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.745 "hdgst": false, 00:21:43.745 "ddgst": false 00:21:43.745 } 00:21:43.745 }, 00:21:43.745 { 00:21:43.745 "method": "bdev_nvme_set_hotplug", 00:21:43.745 "params": { 00:21:43.745 "period_us": 100000, 00:21:43.745 "enable": false 00:21:43.745 } 00:21:43.745 }, 00:21:43.745 { 00:21:43.745 "method": "bdev_wait_for_examine" 00:21:43.745 } 00:21:43.745 ] 00:21:43.745 }, 00:21:43.745 { 00:21:43.745 "subsystem": "nbd", 00:21:43.745 "config": [] 00:21:43.745 } 00:21:43.745 ] 00:21:43.745 }' 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3094159 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3094159 ']' 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3094159 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3094159 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3094159' 00:21:43.745 killing process with pid 3094159 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3094159 00:21:43.745 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.745 00:21:43.745 Latency(us) 00:21:43.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:43.745 =================================================================================================================== 00:21:43.745 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.745 [2024-07-15 15:26:47.610890] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:43.745 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3094159 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3093866 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3093866 ']' 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3093866 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3093866 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3093866' 00:21:44.004 killing process with pid 3093866 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3093866 00:21:44.004 [2024-07-15 15:26:47.845947] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.004 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3093866 00:21:44.293 15:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:44.293 15:26:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.293 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:44.293 15:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:44.293 "subsystems": [ 00:21:44.293 { 00:21:44.293 "subsystem": "keyring", 00:21:44.293 "config": [] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "iobuf", 00:21:44.293 "config": [ 00:21:44.293 { 00:21:44.293 "method": "iobuf_set_options", 00:21:44.293 "params": { 00:21:44.293 "small_pool_count": 8192, 00:21:44.293 "large_pool_count": 1024, 00:21:44.293 "small_bufsize": 8192, 00:21:44.293 "large_bufsize": 135168 00:21:44.293 } 00:21:44.293 } 00:21:44.293 ] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "sock", 00:21:44.293 "config": [ 00:21:44.293 { 00:21:44.293 "method": "sock_set_default_impl", 00:21:44.293 "params": { 00:21:44.293 "impl_name": "posix" 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "sock_impl_set_options", 00:21:44.293 "params": { 00:21:44.293 "impl_name": "ssl", 00:21:44.293 "recv_buf_size": 4096, 00:21:44.293 "send_buf_size": 4096, 00:21:44.293 "enable_recv_pipe": true, 00:21:44.293 "enable_quickack": false, 00:21:44.293 "enable_placement_id": 0, 00:21:44.293 "enable_zerocopy_send_server": true, 00:21:44.293 "enable_zerocopy_send_client": false, 00:21:44.293 "zerocopy_threshold": 0, 00:21:44.293 "tls_version": 0, 00:21:44.293 "enable_ktls": false 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "sock_impl_set_options", 00:21:44.293 "params": { 00:21:44.293 "impl_name": "posix", 00:21:44.293 
"recv_buf_size": 2097152, 00:21:44.293 "send_buf_size": 2097152, 00:21:44.293 "enable_recv_pipe": true, 00:21:44.293 "enable_quickack": false, 00:21:44.293 "enable_placement_id": 0, 00:21:44.293 "enable_zerocopy_send_server": true, 00:21:44.293 "enable_zerocopy_send_client": false, 00:21:44.293 "zerocopy_threshold": 0, 00:21:44.293 "tls_version": 0, 00:21:44.293 "enable_ktls": false 00:21:44.293 } 00:21:44.293 } 00:21:44.293 ] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "vmd", 00:21:44.293 "config": [] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "accel", 00:21:44.293 "config": [ 00:21:44.293 { 00:21:44.293 "method": "accel_set_options", 00:21:44.293 "params": { 00:21:44.293 "small_cache_size": 128, 00:21:44.293 "large_cache_size": 16, 00:21:44.293 "task_count": 2048, 00:21:44.293 "sequence_count": 2048, 00:21:44.293 "buf_count": 2048 00:21:44.293 } 00:21:44.293 } 00:21:44.293 ] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "bdev", 00:21:44.293 "config": [ 00:21:44.293 { 00:21:44.293 "method": "bdev_set_options", 00:21:44.293 "params": { 00:21:44.293 "bdev_io_pool_size": 65535, 00:21:44.293 "bdev_io_cache_size": 256, 00:21:44.293 "bdev_auto_examine": true, 00:21:44.293 "iobuf_small_cache_size": 128, 00:21:44.293 "iobuf_large_cache_size": 16 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "bdev_raid_set_options", 00:21:44.293 "params": { 00:21:44.293 "process_window_size_kb": 1024 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "bdev_iscsi_set_options", 00:21:44.293 "params": { 00:21:44.293 "timeout_sec": 30 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "bdev_nvme_set_options", 00:21:44.293 "params": { 00:21:44.293 "action_on_timeout": "none", 00:21:44.293 "timeout_us": 0, 00:21:44.293 "timeout_admin_us": 0, 00:21:44.293 "keep_alive_timeout_ms": 10000, 00:21:44.293 "arbitration_burst": 0, 00:21:44.293 "low_priority_weight": 0, 00:21:44.293 "medium_priority_weight": 0, 00:21:44.293 "high_priority_weight": 0, 00:21:44.293 "nvme_adminq_poll_period_us": 10000, 00:21:44.293 "nvme_ioq_poll_period_us": 0, 00:21:44.293 "io_queue_requests": 0, 00:21:44.293 "delay_cmd_submit": true, 00:21:44.293 "transport_retry_count": 4, 00:21:44.293 "bdev_retry_count": 3, 00:21:44.293 "transport_ack_timeout": 0, 00:21:44.293 "ctrlr_loss_timeout_sec": 0, 00:21:44.293 "reconnect_delay_sec": 0, 00:21:44.293 "fast_io_fail_timeout_sec": 0, 00:21:44.293 "disable_auto_failback": false, 00:21:44.293 "generate_uuids": false, 00:21:44.293 "transport_tos": 0, 00:21:44.293 "nvme_error_stat": false, 00:21:44.293 "rdma_srq_size": 0, 00:21:44.293 "io_path_stat": false, 00:21:44.293 "allow_accel_sequence": false, 00:21:44.293 "rdma_max_cq_size": 0, 00:21:44.293 "rdma_cm_event_timeout_ms": 0, 00:21:44.293 "dhchap_digests": [ 00:21:44.293 "sha256", 00:21:44.293 "sha384", 00:21:44.293 "sha512" 00:21:44.293 ], 00:21:44.293 "dhchap_dhgroups": [ 00:21:44.293 "null", 00:21:44.293 "ffdhe2048", 00:21:44.293 "ffdhe3072", 00:21:44.293 "ffdhe4096", 00:21:44.293 "ffdhe6144", 00:21:44.293 "ffdhe8192" 00:21:44.293 ] 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "bdev_nvme_set_hotplug", 00:21:44.293 "params": { 00:21:44.293 "period_us": 100000, 00:21:44.293 "enable": false 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "bdev_malloc_create", 00:21:44.293 "params": { 00:21:44.293 "name": "malloc0", 00:21:44.293 "num_blocks": 8192, 00:21:44.293 "block_size": 4096, 00:21:44.293 "physical_block_size": 4096, 
00:21:44.293 "uuid": "435de785-cba5-4a6e-bd64-9642a8c0f809", 00:21:44.293 "optimal_io_boundary": 0 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "bdev_wait_for_examine" 00:21:44.293 } 00:21:44.293 ] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "nbd", 00:21:44.293 "config": [] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "scheduler", 00:21:44.293 "config": [ 00:21:44.293 { 00:21:44.293 "method": "framework_set_scheduler", 00:21:44.293 "params": { 00:21:44.293 "name": "static" 00:21:44.293 } 00:21:44.293 } 00:21:44.293 ] 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "subsystem": "nvmf", 00:21:44.293 "config": [ 00:21:44.293 { 00:21:44.293 "method": "nvmf_set_config", 00:21:44.293 "params": { 00:21:44.293 "discovery_filter": "match_any", 00:21:44.293 "admin_cmd_passthru": { 00:21:44.293 "identify_ctrlr": false 00:21:44.293 } 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "nvmf_set_max_subsystems", 00:21:44.293 "params": { 00:21:44.293 "max_subsystems": 1024 00:21:44.293 } 00:21:44.293 }, 00:21:44.293 { 00:21:44.293 "method": "nvmf_set_crdt", 00:21:44.293 "params": { 00:21:44.293 "crdt1": 0, 00:21:44.293 "crdt2": 0, 00:21:44.294 "crdt3": 0 00:21:44.294 } 00:21:44.294 }, 00:21:44.294 { 00:21:44.294 "method": "nvmf_create_transport", 00:21:44.294 "params": { 00:21:44.294 "trtype": "TCP", 00:21:44.294 "max_queue_depth": 128, 00:21:44.294 "max_io_qpairs_per_ctrlr": 127, 00:21:44.294 "in_capsule_data_size": 4096, 00:21:44.294 "max_io_size": 131072, 00:21:44.294 "io_unit_size": 131072, 00:21:44.294 "max_aq_depth": 128, 00:21:44.294 "num_shared_buffers": 511, 00:21:44.294 "buf_cache_size": 4294967295, 00:21:44.294 "dif_insert_or_strip": false, 00:21:44.294 "zcopy": false, 00:21:44.294 "c2h_success": false, 00:21:44.294 "sock_priority": 0, 00:21:44.294 "abort_timeout_sec": 1, 00:21:44.294 "ack_timeout": 0, 00:21:44.294 "data_wr_pool_size": 0 00:21:44.294 } 00:21:44.294 }, 00:21:44.294 { 00:21:44.294 "method": "nvmf_create_subsystem", 00:21:44.294 "params": { 00:21:44.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.294 "allow_any_host": false, 00:21:44.294 "serial_number": "SPDK00000000000001", 00:21:44.294 "model_number": "SPDK bdev Controller", 00:21:44.294 "max_namespaces": 10, 00:21:44.294 "min_cntlid": 1, 00:21:44.294 "max_cntlid": 65519, 00:21:44.294 "ana_reporting": false 00:21:44.294 } 00:21:44.294 }, 00:21:44.294 { 00:21:44.294 "method": "nvmf_subsystem_add_host", 00:21:44.294 "params": { 00:21:44.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.294 "host": "nqn.2016-06.io.spdk:host1", 00:21:44.294 "psk": "/tmp/tmp.ehJcbKIws5" 00:21:44.294 } 00:21:44.294 }, 00:21:44.294 { 00:21:44.294 "method": "nvmf_subsystem_add_ns", 00:21:44.294 "params": { 00:21:44.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.294 "namespace": { 00:21:44.294 "nsid": 1, 00:21:44.294 "bdev_name": "malloc0", 00:21:44.294 "nguid": "435DE785CBA54A6EBD649642A8C0F809", 00:21:44.294 "uuid": "435de785-cba5-4a6e-bd64-9642a8c0f809", 00:21:44.294 "no_auto_visible": false 00:21:44.294 } 00:21:44.294 } 00:21:44.294 }, 00:21:44.294 { 00:21:44.294 "method": "nvmf_subsystem_add_listener", 00:21:44.294 "params": { 00:21:44.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.294 "listen_address": { 00:21:44.294 "trtype": "TCP", 00:21:44.294 "adrfam": "IPv4", 00:21:44.294 "traddr": "10.0.0.2", 00:21:44.294 "trsvcid": "4420" 00:21:44.294 }, 00:21:44.294 "secure_channel": true 00:21:44.294 } 00:21:44.294 } 00:21:44.294 ] 00:21:44.294 } 00:21:44.294 ] 00:21:44.294 }' 
00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3094448 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3094448 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3094448 ']' 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.294 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.294 [2024-07-15 15:26:48.090408] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:44.294 [2024-07-15 15:26:48.090462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.294 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.294 [2024-07-15 15:26:48.165039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.553 [2024-07-15 15:26:48.237578] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.553 [2024-07-15 15:26:48.237617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.553 [2024-07-15 15:26:48.237627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.553 [2024-07-15 15:26:48.237636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.553 [2024-07-15 15:26:48.237659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
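waitforlisten (from common/autotest_common.sh) is what bridges the gap between forking the target and driving it over RPC: it blocks until the app answers on its UNIX-domain socket. A hedged sketch of the idea, using the generic rpc_get_methods call as the liveness probe (the real helper also checks that the pid is still alive):

    # Poll the RPC socket until the freshly started app is ready to serve requests
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done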
00:21:44.553 [2024-07-15 15:26:48.237719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.553 [2024-07-15 15:26:48.439786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.553 [2024-07-15 15:26:48.455751] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:44.811 [2024-07-15 15:26:48.471807] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.811 [2024-07-15 15:26:48.486979] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3094722 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3094722 /var/tmp/bdevperf.sock 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3094722 ']' 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:45.071 "subsystems": [ 00:21:45.071 { 00:21:45.071 "subsystem": "keyring", 00:21:45.071 "config": [] 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "subsystem": "iobuf", 00:21:45.071 "config": [ 00:21:45.071 { 00:21:45.071 "method": "iobuf_set_options", 00:21:45.071 "params": { 00:21:45.071 "small_pool_count": 8192, 00:21:45.071 "large_pool_count": 1024, 00:21:45.071 "small_bufsize": 8192, 00:21:45.071 "large_bufsize": 135168 00:21:45.071 } 00:21:45.071 } 00:21:45.071 ] 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "subsystem": "sock", 00:21:45.071 "config": [ 00:21:45.071 { 00:21:45.071 "method": "sock_set_default_impl", 00:21:45.071 "params": { 00:21:45.071 "impl_name": "posix" 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "sock_impl_set_options", 00:21:45.071 "params": { 00:21:45.071 "impl_name": "ssl", 00:21:45.071 "recv_buf_size": 4096, 00:21:45.071 "send_buf_size": 4096, 00:21:45.071 "enable_recv_pipe": true, 00:21:45.071 "enable_quickack": false, 00:21:45.071 "enable_placement_id": 0, 00:21:45.071 "enable_zerocopy_send_server": true, 00:21:45.071 "enable_zerocopy_send_client": false, 00:21:45.071 "zerocopy_threshold": 0, 00:21:45.071 "tls_version": 0, 00:21:45.071 "enable_ktls": false 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "sock_impl_set_options", 00:21:45.071 "params": { 00:21:45.071 "impl_name": "posix", 00:21:45.071 "recv_buf_size": 2097152, 00:21:45.071 "send_buf_size": 2097152, 00:21:45.071 "enable_recv_pipe": true, 00:21:45.071 "enable_quickack": false, 00:21:45.071 "enable_placement_id": 0, 00:21:45.071 
"enable_zerocopy_send_server": true, 00:21:45.071 "enable_zerocopy_send_client": false, 00:21:45.071 "zerocopy_threshold": 0, 00:21:45.071 "tls_version": 0, 00:21:45.071 "enable_ktls": false 00:21:45.071 } 00:21:45.071 } 00:21:45.071 ] 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "subsystem": "vmd", 00:21:45.071 "config": [] 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "subsystem": "accel", 00:21:45.071 "config": [ 00:21:45.071 { 00:21:45.071 "method": "accel_set_options", 00:21:45.071 "params": { 00:21:45.071 "small_cache_size": 128, 00:21:45.071 "large_cache_size": 16, 00:21:45.071 "task_count": 2048, 00:21:45.071 "sequence_count": 2048, 00:21:45.071 "buf_count": 2048 00:21:45.071 } 00:21:45.071 } 00:21:45.071 ] 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "subsystem": "bdev", 00:21:45.071 "config": [ 00:21:45.071 { 00:21:45.071 "method": "bdev_set_options", 00:21:45.071 "params": { 00:21:45.071 "bdev_io_pool_size": 65535, 00:21:45.071 "bdev_io_cache_size": 256, 00:21:45.071 "bdev_auto_examine": true, 00:21:45.071 "iobuf_small_cache_size": 128, 00:21:45.071 "iobuf_large_cache_size": 16 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "bdev_raid_set_options", 00:21:45.071 "params": { 00:21:45.071 "process_window_size_kb": 1024 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "bdev_iscsi_set_options", 00:21:45.071 "params": { 00:21:45.071 "timeout_sec": 30 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "bdev_nvme_set_options", 00:21:45.071 "params": { 00:21:45.071 "action_on_timeout": "none", 00:21:45.071 "timeout_us": 0, 00:21:45.071 "timeout_admin_us": 0, 00:21:45.071 "keep_alive_timeout_ms": 10000, 00:21:45.071 "arbitration_burst": 0, 00:21:45.071 "low_priority_weight": 0, 00:21:45.071 "medium_priority_weight": 0, 00:21:45.071 "high_priority_weight": 0, 00:21:45.071 "nvme_adminq_poll_period_us": 10000, 00:21:45.071 "nvme_ioq_poll_period_us": 0, 00:21:45.071 "io_queue_requests": 512, 00:21:45.071 "delay_cmd_submit": true, 00:21:45.071 "transport_retry_count": 4, 00:21:45.071 "bdev_retry_count": 3, 00:21:45.071 "transport_ack_timeout": 0, 00:21:45.071 "ctrlr_loss_timeout_sec": 0, 00:21:45.071 "reconnect_delay_sec": 0, 00:21:45.071 "fast_io_fail_timeout_sec": 0, 00:21:45.071 "disable_auto_failback": false, 00:21:45.071 "generate_uuids": false, 00:21:45.071 "transport_tos": 0, 00:21:45.071 "nvme_error_stat": false, 00:21:45.071 "rdma_srq_size": 0, 00:21:45.071 "io_path_stat": false, 00:21:45.071 "allow_accel_sequence": false, 00:21:45.071 "rdma_max_cq_size": 0, 00:21:45.071 "rdma_cm_event_timeout_ms": 0, 00:21:45.071 "dhchap_digests": [ 00:21:45.071 "sha256", 00:21:45.071 "sha384", 00:21:45.071 "sha512" 00:21:45.071 ], 00:21:45.071 "dhchap_dhgroups": [ 00:21:45.071 "null", 00:21:45.071 "ffdhe2048", 00:21:45.071 "ffdhe3072", 00:21:45.071 "ffdhe4096", 00:21:45.071 "ffdhe6144", 00:21:45.071 "ffdhe8192" 00:21:45.071 ] 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "bdev_nvme_attach_controller", 00:21:45.071 "params": { 00:21:45.071 "name": "TLSTEST", 00:21:45.071 "trtype": "TCP", 00:21:45.071 "adrfam": "IPv4", 00:21:45.071 "traddr": "10.0.0.2", 00:21:45.071 "trsvcid": "4420", 00:21:45.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.071 "prchk_reftag": false, 00:21:45.071 "prchk_guard": false, 00:21:45.071 "ctrlr_loss_timeout_sec": 0, 00:21:45.071 "reconnect_delay_sec": 0, 00:21:45.071 "fast_io_fail_timeout_sec": 0, 00:21:45.071 "psk": "/tmp/tmp.ehJcbKIws5", 00:21:45.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:21:45.071 "hdgst": false, 00:21:45.071 "ddgst": false 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "bdev_nvme_set_hotplug", 00:21:45.071 "params": { 00:21:45.071 "period_us": 100000, 00:21:45.071 "enable": false 00:21:45.071 } 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "method": "bdev_wait_for_examine" 00:21:45.071 } 00:21:45.071 ] 00:21:45.071 }, 00:21:45.071 { 00:21:45.071 "subsystem": "nbd", 00:21:45.071 "config": [] 00:21:45.071 } 00:21:45.071 ] 00:21:45.071 }' 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.071 15:26:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.071 [2024-07-15 15:26:48.961209] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:45.071 [2024-07-15 15:26:48.961261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094722 ] 00:21:45.330 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.330 [2024-07-15 15:26:49.026498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.330 [2024-07-15 15:26:49.096178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.588 [2024-07-15 15:26:49.238825] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.588 [2024-07-15 15:26:49.238913] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:46.156 15:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.156 15:26:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:46.156 15:26:49 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:46.156 Running I/O for 10 seconds... 
00:21:56.135
00:21:56.135 Latency(us)
00:21:56.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.135 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:56.135 Verification LBA range: start 0x0 length 0x2000
00:21:56.135 TLSTESTn1 : 10.03 4398.90 17.18 0.00 0.00 29045.45 4744.81 71303.17
00:21:56.135 ===================================================================================================================
00:21:56.135 Total : 4398.90 17.18 0.00 0.00 29045.45 4744.81 71303.17
00:21:56.135 0
00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3094722 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3094722 ']' 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3094722 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3094722 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3094722' 00:21:56.135 killing process with pid 3094722 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3094722 00:21:56.135 Received shutdown signal, test time was about 10.000000 seconds
00:21:56.135
00:21:56.135 Latency(us)
00:21:56.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.135 ===================================================================================================================
00:21:56.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:56.135 [2024-07-15 15:26:59.986570] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:56.135 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3094722 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3094448 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3094448 ']' 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3094448 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3094448 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3094448' 00:21:56.406 killing process with pid 3094448 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3094448 00:21:56.406 [2024-07-15 15:27:00.223215] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal
in v24.09 hit 1 times 00:21:56.406 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3094448 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3096613 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3096613 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3096613 ']' 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.678 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.678 [2024-07-15 15:27:00.471282] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:56.678 [2024-07-15 15:27:00.471344] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.678 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.678 [2024-07-15 15:27:00.548399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.937 [2024-07-15 15:27:00.620412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.937 [2024-07-15 15:27:00.620450] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.937 [2024-07-15 15:27:00.620459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.937 [2024-07-15 15:27:00.620468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.937 [2024-07-15 15:27:00.620491] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
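This time the target starts without a -c config; the setup_nvmf_tgt helper traced next builds the subsystem one RPC at a time. Condensed from the tls.sh@51 through tls.sh@58 trace below (rpc.py path abbreviated), the sequence is:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5

The -k on nvmf_subsystem_add_listener is what requests a TLS-secured listener; the --psk path is the same PSK file used throughout this run.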
00:21:56.937 [2024-07-15 15:27:00.620511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ehJcbKIws5 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ehJcbKIws5 00:21:57.564 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:57.823 [2024-07-15 15:27:01.474719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.823 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:57.823 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:58.081 [2024-07-15 15:27:01.803661] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.081 [2024-07-15 15:27:01.803869] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.082 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:58.082 malloc0 00:21:58.082 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:58.340 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ehJcbKIws5 00:21:58.598 [2024-07-15 15:27:02.313290] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3096996 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3096996 /var/tmp/bdevperf.sock 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3096996 ']' 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.598 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.598 [2024-07-15 15:27:02.357326] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:58.598 [2024-07-15 15:27:02.357379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096996 ] 00:21:58.598 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.598 [2024-07-15 15:27:02.427306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.598 [2024-07-15 15:27:02.502032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.531 15:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.531 15:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:59.531 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ehJcbKIws5 00:21:59.531 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:59.787 [2024-07-15 15:27:03.508703] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.787 nvme0n1 00:21:59.787 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:59.787 Running I/O for 1 seconds... 
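This run exercises the keyring flow that replaces the deprecated path-based PSK: the key file is registered once under a name, and bdev_nvme_attach_controller then references it by that name instead of a path. Exactly as traced above:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ehJcbKIws5
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1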
00:22:01.157 00:22:01.157 Latency(us) 00:22:01.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.157 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:01.157 Verification LBA range: start 0x0 length 0x2000 00:22:01.157 nvme0n1 : 1.03 4049.20 15.82 0.00 0.00 31187.08 6658.46 64172.85 00:22:01.157 =================================================================================================================== 00:22:01.157 Total : 4049.20 15.82 0.00 0.00 31187.08 6658.46 64172.85 00:22:01.157 0 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3096996 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3096996 ']' 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3096996 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3096996 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3096996' 00:22:01.157 killing process with pid 3096996 00:22:01.157 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3096996 00:22:01.157 Received shutdown signal, test time was about 1.000000 seconds 00:22:01.157 00:22:01.157 Latency(us) 00:22:01.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.158 =================================================================================================================== 00:22:01.158 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:01.158 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3096996 00:22:01.158 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3096613 00:22:01.158 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3096613 ']' 00:22:01.158 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3096613 00:22:01.158 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:01.158 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:01.158 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3096613 00:22:01.158 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:01.158 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:01.158 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3096613' 00:22:01.158 killing process with pid 3096613 00:22:01.158 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3096613 00:22:01.158 [2024-07-15 15:27:05.041178] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:01.158 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3096613 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.416 
15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3097876 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3097876 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3097876 ']' 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.416 15:27:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.416 [2024-07-15 15:27:05.288358] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:01.416 [2024-07-15 15:27:05.288408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.416 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.674 [2024-07-15 15:27:05.361192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.674 [2024-07-15 15:27:05.432809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.674 [2024-07-15 15:27:05.432852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.674 [2024-07-15 15:27:05.432862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.674 [2024-07-15 15:27:05.432871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.674 [2024-07-15 15:27:05.432878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
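Every SPDK app in this log follows the same lifecycle, managed by the autotest_common.sh helpers: fork it in the background, record the pid, waitforlisten on its RPC socket, run the step, then killprocess. Sketched (the ip netns prefix and full paths seen in the trace are omitted):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    waitforlisten $nvmfpid       # helper: poll the RPC socket until the app answers
    # ... drive the test step over rpc.py ...
    killprocess $nvmfpid         # helper: terminate the process and reap it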
00:22:01.674 [2024-07-15 15:27:05.432905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.241 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.241 [2024-07-15 15:27:06.136351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.499 malloc0 00:22:02.499 [2024-07-15 15:27:06.164729] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.499 [2024-07-15 15:27:06.164929] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3098229 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3098229 /var/tmp/bdevperf.sock 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3098229 ']' 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.499 15:27:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.499 [2024-07-15 15:27:06.242293] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
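With both ends connected, tls.sh@265 and tls.sh@266 below snapshot the live state from each side via save_config, so the JSON dumps that follow are round-tripped runtime configuration rather than test input. As a sketch of those two calls:

    tgtcfg=$(scripts/rpc.py save_config)                               # target side
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)   # initiator side

Worth noting in the target dump: the keyring subsystem now carries the keyring_file_add_key entry for key0, and the listener is saved with "secure_channel": false plus "sock_impl": "ssl", i.e. in this variant of the test TLS is expressed through the sock implementation rather than the secure_channel flag.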
00:22:02.499 [2024-07-15 15:27:06.242340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098229 ] 00:22:02.499 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.499 [2024-07-15 15:27:06.312793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.499 [2024-07-15 15:27:06.387086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.433 15:27:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.433 15:27:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:03.433 15:27:07 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ehJcbKIws5 00:22:03.433 15:27:07 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:03.690 [2024-07-15 15:27:07.389442] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.690 nvme0n1 00:22:03.690 15:27:07 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:03.690 Running I/O for 1 seconds... 00:22:05.062 00:22:05.062 Latency(us) 00:22:05.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.062 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:05.062 Verification LBA range: start 0x0 length 0x2000 00:22:05.062 nvme0n1 : 1.03 3937.21 15.38 0.00 0.00 32084.92 6920.60 77175.19 00:22:05.062 =================================================================================================================== 00:22:05.062 Total : 3937.21 15.38 0.00 0.00 32084.92 6920.60 77175.19 00:22:05.062 0 00:22:05.062 15:27:08 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:05.062 15:27:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.062 15:27:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.062 15:27:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.062 15:27:08 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:05.062 "subsystems": [ 00:22:05.062 { 00:22:05.062 "subsystem": "keyring", 00:22:05.062 "config": [ 00:22:05.062 { 00:22:05.062 "method": "keyring_file_add_key", 00:22:05.062 "params": { 00:22:05.062 "name": "key0", 00:22:05.062 "path": "/tmp/tmp.ehJcbKIws5" 00:22:05.062 } 00:22:05.062 } 00:22:05.062 ] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "iobuf", 00:22:05.062 "config": [ 00:22:05.062 { 00:22:05.062 "method": "iobuf_set_options", 00:22:05.062 "params": { 00:22:05.062 "small_pool_count": 8192, 00:22:05.062 "large_pool_count": 1024, 00:22:05.062 "small_bufsize": 8192, 00:22:05.062 "large_bufsize": 135168 00:22:05.062 } 00:22:05.062 } 00:22:05.062 ] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "sock", 00:22:05.062 "config": [ 00:22:05.062 { 00:22:05.062 "method": "sock_set_default_impl", 00:22:05.062 "params": { 00:22:05.062 "impl_name": "posix" 00:22:05.062 } 
00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "sock_impl_set_options", 00:22:05.062 "params": { 00:22:05.062 "impl_name": "ssl", 00:22:05.062 "recv_buf_size": 4096, 00:22:05.062 "send_buf_size": 4096, 00:22:05.062 "enable_recv_pipe": true, 00:22:05.062 "enable_quickack": false, 00:22:05.062 "enable_placement_id": 0, 00:22:05.062 "enable_zerocopy_send_server": true, 00:22:05.062 "enable_zerocopy_send_client": false, 00:22:05.062 "zerocopy_threshold": 0, 00:22:05.062 "tls_version": 0, 00:22:05.062 "enable_ktls": false 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "sock_impl_set_options", 00:22:05.062 "params": { 00:22:05.062 "impl_name": "posix", 00:22:05.062 "recv_buf_size": 2097152, 00:22:05.062 "send_buf_size": 2097152, 00:22:05.062 "enable_recv_pipe": true, 00:22:05.062 "enable_quickack": false, 00:22:05.062 "enable_placement_id": 0, 00:22:05.062 "enable_zerocopy_send_server": true, 00:22:05.062 "enable_zerocopy_send_client": false, 00:22:05.062 "zerocopy_threshold": 0, 00:22:05.062 "tls_version": 0, 00:22:05.062 "enable_ktls": false 00:22:05.062 } 00:22:05.062 } 00:22:05.062 ] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "vmd", 00:22:05.062 "config": [] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "accel", 00:22:05.062 "config": [ 00:22:05.062 { 00:22:05.062 "method": "accel_set_options", 00:22:05.062 "params": { 00:22:05.062 "small_cache_size": 128, 00:22:05.062 "large_cache_size": 16, 00:22:05.062 "task_count": 2048, 00:22:05.062 "sequence_count": 2048, 00:22:05.062 "buf_count": 2048 00:22:05.062 } 00:22:05.062 } 00:22:05.062 ] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "bdev", 00:22:05.062 "config": [ 00:22:05.062 { 00:22:05.062 "method": "bdev_set_options", 00:22:05.062 "params": { 00:22:05.062 "bdev_io_pool_size": 65535, 00:22:05.062 "bdev_io_cache_size": 256, 00:22:05.062 "bdev_auto_examine": true, 00:22:05.062 "iobuf_small_cache_size": 128, 00:22:05.062 "iobuf_large_cache_size": 16 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "bdev_raid_set_options", 00:22:05.062 "params": { 00:22:05.062 "process_window_size_kb": 1024 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "bdev_iscsi_set_options", 00:22:05.062 "params": { 00:22:05.062 "timeout_sec": 30 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "bdev_nvme_set_options", 00:22:05.062 "params": { 00:22:05.062 "action_on_timeout": "none", 00:22:05.062 "timeout_us": 0, 00:22:05.062 "timeout_admin_us": 0, 00:22:05.062 "keep_alive_timeout_ms": 10000, 00:22:05.062 "arbitration_burst": 0, 00:22:05.062 "low_priority_weight": 0, 00:22:05.062 "medium_priority_weight": 0, 00:22:05.062 "high_priority_weight": 0, 00:22:05.062 "nvme_adminq_poll_period_us": 10000, 00:22:05.062 "nvme_ioq_poll_period_us": 0, 00:22:05.062 "io_queue_requests": 0, 00:22:05.062 "delay_cmd_submit": true, 00:22:05.062 "transport_retry_count": 4, 00:22:05.062 "bdev_retry_count": 3, 00:22:05.062 "transport_ack_timeout": 0, 00:22:05.062 "ctrlr_loss_timeout_sec": 0, 00:22:05.062 "reconnect_delay_sec": 0, 00:22:05.062 "fast_io_fail_timeout_sec": 0, 00:22:05.062 "disable_auto_failback": false, 00:22:05.062 "generate_uuids": false, 00:22:05.062 "transport_tos": 0, 00:22:05.062 "nvme_error_stat": false, 00:22:05.062 "rdma_srq_size": 0, 00:22:05.062 "io_path_stat": false, 00:22:05.062 "allow_accel_sequence": false, 00:22:05.062 "rdma_max_cq_size": 0, 00:22:05.062 "rdma_cm_event_timeout_ms": 0, 00:22:05.062 "dhchap_digests": [ 00:22:05.062 "sha256", 
00:22:05.062 "sha384", 00:22:05.062 "sha512" 00:22:05.062 ], 00:22:05.062 "dhchap_dhgroups": [ 00:22:05.062 "null", 00:22:05.062 "ffdhe2048", 00:22:05.062 "ffdhe3072", 00:22:05.062 "ffdhe4096", 00:22:05.062 "ffdhe6144", 00:22:05.062 "ffdhe8192" 00:22:05.062 ] 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "bdev_nvme_set_hotplug", 00:22:05.062 "params": { 00:22:05.062 "period_us": 100000, 00:22:05.062 "enable": false 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "bdev_malloc_create", 00:22:05.062 "params": { 00:22:05.062 "name": "malloc0", 00:22:05.062 "num_blocks": 8192, 00:22:05.062 "block_size": 4096, 00:22:05.062 "physical_block_size": 4096, 00:22:05.062 "uuid": "555c1729-d60e-4715-95a8-e96c43456299", 00:22:05.062 "optimal_io_boundary": 0 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "bdev_wait_for_examine" 00:22:05.062 } 00:22:05.062 ] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "nbd", 00:22:05.062 "config": [] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "scheduler", 00:22:05.062 "config": [ 00:22:05.062 { 00:22:05.062 "method": "framework_set_scheduler", 00:22:05.062 "params": { 00:22:05.062 "name": "static" 00:22:05.062 } 00:22:05.062 } 00:22:05.062 ] 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "subsystem": "nvmf", 00:22:05.062 "config": [ 00:22:05.062 { 00:22:05.062 "method": "nvmf_set_config", 00:22:05.062 "params": { 00:22:05.062 "discovery_filter": "match_any", 00:22:05.062 "admin_cmd_passthru": { 00:22:05.062 "identify_ctrlr": false 00:22:05.062 } 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "nvmf_set_max_subsystems", 00:22:05.062 "params": { 00:22:05.062 "max_subsystems": 1024 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "nvmf_set_crdt", 00:22:05.062 "params": { 00:22:05.062 "crdt1": 0, 00:22:05.062 "crdt2": 0, 00:22:05.062 "crdt3": 0 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "nvmf_create_transport", 00:22:05.062 "params": { 00:22:05.062 "trtype": "TCP", 00:22:05.062 "max_queue_depth": 128, 00:22:05.062 "max_io_qpairs_per_ctrlr": 127, 00:22:05.062 "in_capsule_data_size": 4096, 00:22:05.062 "max_io_size": 131072, 00:22:05.062 "io_unit_size": 131072, 00:22:05.062 "max_aq_depth": 128, 00:22:05.062 "num_shared_buffers": 511, 00:22:05.062 "buf_cache_size": 4294967295, 00:22:05.062 "dif_insert_or_strip": false, 00:22:05.062 "zcopy": false, 00:22:05.062 "c2h_success": false, 00:22:05.062 "sock_priority": 0, 00:22:05.062 "abort_timeout_sec": 1, 00:22:05.062 "ack_timeout": 0, 00:22:05.062 "data_wr_pool_size": 0 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "nvmf_create_subsystem", 00:22:05.062 "params": { 00:22:05.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.062 "allow_any_host": false, 00:22:05.062 "serial_number": "00000000000000000000", 00:22:05.062 "model_number": "SPDK bdev Controller", 00:22:05.062 "max_namespaces": 32, 00:22:05.062 "min_cntlid": 1, 00:22:05.062 "max_cntlid": 65519, 00:22:05.062 "ana_reporting": false 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "nvmf_subsystem_add_host", 00:22:05.062 "params": { 00:22:05.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.062 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.062 "psk": "key0" 00:22:05.062 } 00:22:05.062 }, 00:22:05.062 { 00:22:05.062 "method": "nvmf_subsystem_add_ns", 00:22:05.062 "params": { 00:22:05.062 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.062 "namespace": { 00:22:05.063 "nsid": 1, 
00:22:05.063 "bdev_name": "malloc0", 00:22:05.063 "nguid": "555C1729D60E471595A8E96C43456299", 00:22:05.063 "uuid": "555c1729-d60e-4715-95a8-e96c43456299", 00:22:05.063 "no_auto_visible": false 00:22:05.063 } 00:22:05.063 } 00:22:05.063 }, 00:22:05.063 { 00:22:05.063 "method": "nvmf_subsystem_add_listener", 00:22:05.063 "params": { 00:22:05.063 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.063 "listen_address": { 00:22:05.063 "trtype": "TCP", 00:22:05.063 "adrfam": "IPv4", 00:22:05.063 "traddr": "10.0.0.2", 00:22:05.063 "trsvcid": "4420" 00:22:05.063 }, 00:22:05.063 "secure_channel": false, 00:22:05.063 "sock_impl": "ssl" 00:22:05.063 } 00:22:05.063 } 00:22:05.063 ] 00:22:05.063 } 00:22:05.063 ] 00:22:05.063 }' 00:22:05.063 15:27:08 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:05.321 15:27:08 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:05.321 "subsystems": [ 00:22:05.321 { 00:22:05.321 "subsystem": "keyring", 00:22:05.321 "config": [ 00:22:05.321 { 00:22:05.321 "method": "keyring_file_add_key", 00:22:05.321 "params": { 00:22:05.321 "name": "key0", 00:22:05.321 "path": "/tmp/tmp.ehJcbKIws5" 00:22:05.321 } 00:22:05.321 } 00:22:05.321 ] 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "subsystem": "iobuf", 00:22:05.321 "config": [ 00:22:05.321 { 00:22:05.321 "method": "iobuf_set_options", 00:22:05.321 "params": { 00:22:05.321 "small_pool_count": 8192, 00:22:05.321 "large_pool_count": 1024, 00:22:05.321 "small_bufsize": 8192, 00:22:05.321 "large_bufsize": 135168 00:22:05.321 } 00:22:05.321 } 00:22:05.321 ] 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "subsystem": "sock", 00:22:05.321 "config": [ 00:22:05.321 { 00:22:05.321 "method": "sock_set_default_impl", 00:22:05.321 "params": { 00:22:05.321 "impl_name": "posix" 00:22:05.321 } 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "method": "sock_impl_set_options", 00:22:05.321 "params": { 00:22:05.321 "impl_name": "ssl", 00:22:05.321 "recv_buf_size": 4096, 00:22:05.321 "send_buf_size": 4096, 00:22:05.321 "enable_recv_pipe": true, 00:22:05.321 "enable_quickack": false, 00:22:05.321 "enable_placement_id": 0, 00:22:05.321 "enable_zerocopy_send_server": true, 00:22:05.321 "enable_zerocopy_send_client": false, 00:22:05.321 "zerocopy_threshold": 0, 00:22:05.321 "tls_version": 0, 00:22:05.321 "enable_ktls": false 00:22:05.321 } 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "method": "sock_impl_set_options", 00:22:05.321 "params": { 00:22:05.321 "impl_name": "posix", 00:22:05.321 "recv_buf_size": 2097152, 00:22:05.321 "send_buf_size": 2097152, 00:22:05.321 "enable_recv_pipe": true, 00:22:05.321 "enable_quickack": false, 00:22:05.321 "enable_placement_id": 0, 00:22:05.321 "enable_zerocopy_send_server": true, 00:22:05.321 "enable_zerocopy_send_client": false, 00:22:05.321 "zerocopy_threshold": 0, 00:22:05.321 "tls_version": 0, 00:22:05.321 "enable_ktls": false 00:22:05.321 } 00:22:05.321 } 00:22:05.321 ] 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "subsystem": "vmd", 00:22:05.321 "config": [] 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "subsystem": "accel", 00:22:05.321 "config": [ 00:22:05.321 { 00:22:05.321 "method": "accel_set_options", 00:22:05.321 "params": { 00:22:05.321 "small_cache_size": 128, 00:22:05.321 "large_cache_size": 16, 00:22:05.321 "task_count": 2048, 00:22:05.321 "sequence_count": 2048, 00:22:05.321 "buf_count": 2048 00:22:05.321 } 00:22:05.321 } 00:22:05.321 ] 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "subsystem": "bdev", 
00:22:05.321 "config": [ 00:22:05.321 { 00:22:05.321 "method": "bdev_set_options", 00:22:05.321 "params": { 00:22:05.321 "bdev_io_pool_size": 65535, 00:22:05.321 "bdev_io_cache_size": 256, 00:22:05.321 "bdev_auto_examine": true, 00:22:05.321 "iobuf_small_cache_size": 128, 00:22:05.321 "iobuf_large_cache_size": 16 00:22:05.321 } 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "method": "bdev_raid_set_options", 00:22:05.321 "params": { 00:22:05.321 "process_window_size_kb": 1024 00:22:05.321 } 00:22:05.321 }, 00:22:05.321 { 00:22:05.321 "method": "bdev_iscsi_set_options", 00:22:05.321 "params": { 00:22:05.321 "timeout_sec": 30 00:22:05.321 } 00:22:05.321 }, 00:22:05.321 { 00:22:05.322 "method": "bdev_nvme_set_options", 00:22:05.322 "params": { 00:22:05.322 "action_on_timeout": "none", 00:22:05.322 "timeout_us": 0, 00:22:05.322 "timeout_admin_us": 0, 00:22:05.322 "keep_alive_timeout_ms": 10000, 00:22:05.322 "arbitration_burst": 0, 00:22:05.322 "low_priority_weight": 0, 00:22:05.322 "medium_priority_weight": 0, 00:22:05.322 "high_priority_weight": 0, 00:22:05.322 "nvme_adminq_poll_period_us": 10000, 00:22:05.322 "nvme_ioq_poll_period_us": 0, 00:22:05.322 "io_queue_requests": 512, 00:22:05.322 "delay_cmd_submit": true, 00:22:05.322 "transport_retry_count": 4, 00:22:05.322 "bdev_retry_count": 3, 00:22:05.322 "transport_ack_timeout": 0, 00:22:05.322 "ctrlr_loss_timeout_sec": 0, 00:22:05.322 "reconnect_delay_sec": 0, 00:22:05.322 "fast_io_fail_timeout_sec": 0, 00:22:05.322 "disable_auto_failback": false, 00:22:05.322 "generate_uuids": false, 00:22:05.322 "transport_tos": 0, 00:22:05.322 "nvme_error_stat": false, 00:22:05.322 "rdma_srq_size": 0, 00:22:05.322 "io_path_stat": false, 00:22:05.322 "allow_accel_sequence": false, 00:22:05.322 "rdma_max_cq_size": 0, 00:22:05.322 "rdma_cm_event_timeout_ms": 0, 00:22:05.322 "dhchap_digests": [ 00:22:05.322 "sha256", 00:22:05.322 "sha384", 00:22:05.322 "sha512" 00:22:05.322 ], 00:22:05.322 "dhchap_dhgroups": [ 00:22:05.322 "null", 00:22:05.322 "ffdhe2048", 00:22:05.322 "ffdhe3072", 00:22:05.322 "ffdhe4096", 00:22:05.322 "ffdhe6144", 00:22:05.322 "ffdhe8192" 00:22:05.322 ] 00:22:05.322 } 00:22:05.322 }, 00:22:05.322 { 00:22:05.322 "method": "bdev_nvme_attach_controller", 00:22:05.322 "params": { 00:22:05.322 "name": "nvme0", 00:22:05.322 "trtype": "TCP", 00:22:05.322 "adrfam": "IPv4", 00:22:05.322 "traddr": "10.0.0.2", 00:22:05.322 "trsvcid": "4420", 00:22:05.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.322 "prchk_reftag": false, 00:22:05.322 "prchk_guard": false, 00:22:05.322 "ctrlr_loss_timeout_sec": 0, 00:22:05.322 "reconnect_delay_sec": 0, 00:22:05.322 "fast_io_fail_timeout_sec": 0, 00:22:05.322 "psk": "key0", 00:22:05.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.322 "hdgst": false, 00:22:05.322 "ddgst": false 00:22:05.322 } 00:22:05.322 }, 00:22:05.322 { 00:22:05.322 "method": "bdev_nvme_set_hotplug", 00:22:05.322 "params": { 00:22:05.322 "period_us": 100000, 00:22:05.322 "enable": false 00:22:05.322 } 00:22:05.322 }, 00:22:05.322 { 00:22:05.322 "method": "bdev_enable_histogram", 00:22:05.322 "params": { 00:22:05.322 "name": "nvme0n1", 00:22:05.322 "enable": true 00:22:05.322 } 00:22:05.322 }, 00:22:05.322 { 00:22:05.322 "method": "bdev_wait_for_examine" 00:22:05.322 } 00:22:05.322 ] 00:22:05.322 }, 00:22:05.322 { 00:22:05.322 "subsystem": "nbd", 00:22:05.322 "config": [] 00:22:05.322 } 00:22:05.322 ] 00:22:05.322 }' 00:22:05.322 15:27:08 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 3098229 00:22:05.322 15:27:08 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 3098229 ']' 00:22:05.322 15:27:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3098229 00:22:05.322 15:27:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:05.322 15:27:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.322 15:27:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3098229 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3098229' 00:22:05.322 killing process with pid 3098229 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3098229 00:22:05.322 Received shutdown signal, test time was about 1.000000 seconds 00:22:05.322 00:22:05.322 Latency(us) 00:22:05.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.322 =================================================================================================================== 00:22:05.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3098229 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 3097876 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3097876 ']' 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3097876 00:22:05.322 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3097876 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3097876' 00:22:05.579 killing process with pid 3097876 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3097876 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3097876 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.579 15:27:09 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:05.579 "subsystems": [ 00:22:05.579 { 00:22:05.579 "subsystem": "keyring", 00:22:05.579 "config": [ 00:22:05.579 { 00:22:05.579 "method": "keyring_file_add_key", 00:22:05.579 "params": { 00:22:05.579 "name": "key0", 00:22:05.579 "path": "/tmp/tmp.ehJcbKIws5" 00:22:05.579 } 00:22:05.579 } 00:22:05.579 ] 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "subsystem": "iobuf", 00:22:05.579 "config": [ 00:22:05.579 { 00:22:05.579 "method": "iobuf_set_options", 00:22:05.579 "params": { 00:22:05.579 "small_pool_count": 8192, 00:22:05.579 "large_pool_count": 1024, 00:22:05.579 "small_bufsize": 8192, 00:22:05.579 "large_bufsize": 135168 00:22:05.579 } 00:22:05.579 } 00:22:05.579 ] 00:22:05.579 }, 
00:22:05.579 { 00:22:05.579 "subsystem": "sock", 00:22:05.579 "config": [ 00:22:05.579 { 00:22:05.579 "method": "sock_set_default_impl", 00:22:05.579 "params": { 00:22:05.579 "impl_name": "posix" 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "sock_impl_set_options", 00:22:05.579 "params": { 00:22:05.579 "impl_name": "ssl", 00:22:05.579 "recv_buf_size": 4096, 00:22:05.579 "send_buf_size": 4096, 00:22:05.579 "enable_recv_pipe": true, 00:22:05.579 "enable_quickack": false, 00:22:05.579 "enable_placement_id": 0, 00:22:05.579 "enable_zerocopy_send_server": true, 00:22:05.579 "enable_zerocopy_send_client": false, 00:22:05.579 "zerocopy_threshold": 0, 00:22:05.579 "tls_version": 0, 00:22:05.579 "enable_ktls": false 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "sock_impl_set_options", 00:22:05.579 "params": { 00:22:05.579 "impl_name": "posix", 00:22:05.579 "recv_buf_size": 2097152, 00:22:05.579 "send_buf_size": 2097152, 00:22:05.579 "enable_recv_pipe": true, 00:22:05.579 "enable_quickack": false, 00:22:05.579 "enable_placement_id": 0, 00:22:05.579 "enable_zerocopy_send_server": true, 00:22:05.579 "enable_zerocopy_send_client": false, 00:22:05.579 "zerocopy_threshold": 0, 00:22:05.579 "tls_version": 0, 00:22:05.579 "enable_ktls": false 00:22:05.579 } 00:22:05.579 } 00:22:05.579 ] 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "subsystem": "vmd", 00:22:05.579 "config": [] 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "subsystem": "accel", 00:22:05.579 "config": [ 00:22:05.579 { 00:22:05.579 "method": "accel_set_options", 00:22:05.579 "params": { 00:22:05.579 "small_cache_size": 128, 00:22:05.579 "large_cache_size": 16, 00:22:05.579 "task_count": 2048, 00:22:05.579 "sequence_count": 2048, 00:22:05.579 "buf_count": 2048 00:22:05.579 } 00:22:05.579 } 00:22:05.579 ] 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "subsystem": "bdev", 00:22:05.579 "config": [ 00:22:05.579 { 00:22:05.579 "method": "bdev_set_options", 00:22:05.579 "params": { 00:22:05.579 "bdev_io_pool_size": 65535, 00:22:05.579 "bdev_io_cache_size": 256, 00:22:05.579 "bdev_auto_examine": true, 00:22:05.579 "iobuf_small_cache_size": 128, 00:22:05.579 "iobuf_large_cache_size": 16 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "bdev_raid_set_options", 00:22:05.579 "params": { 00:22:05.579 "process_window_size_kb": 1024 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "bdev_iscsi_set_options", 00:22:05.579 "params": { 00:22:05.579 "timeout_sec": 30 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "bdev_nvme_set_options", 00:22:05.579 "params": { 00:22:05.579 "action_on_timeout": "none", 00:22:05.579 "timeout_us": 0, 00:22:05.579 "timeout_admin_us": 0, 00:22:05.579 "keep_alive_timeout_ms": 10000, 00:22:05.579 "arbitration_burst": 0, 00:22:05.579 "low_priority_weight": 0, 00:22:05.579 "medium_priority_weight": 0, 00:22:05.579 "high_priority_weight": 0, 00:22:05.579 "nvme_adminq_poll_period_us": 10000, 00:22:05.579 "nvme_ioq_poll_period_us": 0, 00:22:05.579 "io_queue_requests": 0, 00:22:05.579 "delay_cmd_submit": true, 00:22:05.579 "transport_retry_count": 4, 00:22:05.579 "bdev_retry_count": 3, 00:22:05.579 "transport_ack_timeout": 0, 00:22:05.579 "ctrlr_loss_timeout_sec": 0, 00:22:05.579 "reconnect_delay_sec": 0, 00:22:05.579 "fast_io_fail_timeout_sec": 0, 00:22:05.579 "disable_auto_failback": false, 00:22:05.579 "generate_uuids": false, 00:22:05.579 "transport_tos": 0, 00:22:05.579 "nvme_error_stat": false, 00:22:05.579 "rdma_srq_size": 0, 
00:22:05.579 "io_path_stat": false, 00:22:05.579 "allow_accel_sequence": false, 00:22:05.579 "rdma_max_cq_size": 0, 00:22:05.579 "rdma_cm_event_timeout_ms": 0, 00:22:05.579 "dhchap_digests": [ 00:22:05.579 "sha256", 00:22:05.579 "sha384", 00:22:05.579 "sha512" 00:22:05.579 ], 00:22:05.579 "dhchap_dhgroups": [ 00:22:05.579 "null", 00:22:05.579 "ffdhe2048", 00:22:05.579 "ffdhe3072", 00:22:05.579 "ffdhe4096", 00:22:05.579 "ffdhe6144", 00:22:05.579 "ffdhe8192" 00:22:05.579 ] 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "bdev_nvme_set_hotplug", 00:22:05.579 "params": { 00:22:05.579 "period_us": 100000, 00:22:05.579 "enable": false 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "bdev_malloc_create", 00:22:05.579 "params": { 00:22:05.579 "name": "malloc0", 00:22:05.579 "num_blocks": 8192, 00:22:05.579 "block_size": 4096, 00:22:05.579 "physical_block_size": 4096, 00:22:05.579 "uuid": "555c1729-d60e-4715-95a8-e96c43456299", 00:22:05.579 "optimal_io_boundary": 0 00:22:05.579 } 00:22:05.579 }, 00:22:05.579 { 00:22:05.579 "method": "bdev_wait_for_examine" 00:22:05.579 } 00:22:05.579 ] 00:22:05.579 }, 00:22:05.579 { 00:22:05.580 "subsystem": "nbd", 00:22:05.580 "config": [] 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "subsystem": "scheduler", 00:22:05.580 "config": [ 00:22:05.580 { 00:22:05.580 "method": "framework_set_scheduler", 00:22:05.580 "params": { 00:22:05.580 "name": "static" 00:22:05.580 } 00:22:05.580 } 00:22:05.580 ] 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "subsystem": "nvmf", 00:22:05.580 "config": [ 00:22:05.580 { 00:22:05.580 "method": "nvmf_set_config", 00:22:05.580 "params": { 00:22:05.580 "discovery_filter": "match_any", 00:22:05.580 "admin_cmd_passthru": { 00:22:05.580 "identify_ctrlr": false 00:22:05.580 } 00:22:05.580 } 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "method": "nvmf_set_max_subsystems", 00:22:05.580 "params": { 00:22:05.580 "max_subsystems": 1024 00:22:05.580 } 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "method": "nvmf_set_crdt", 00:22:05.580 "params": { 00:22:05.580 "crdt1": 0, 00:22:05.580 "crdt2": 0, 00:22:05.580 "crdt3": 0 00:22:05.580 } 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "method": "nvmf_create_transport", 00:22:05.580 "params": { 00:22:05.580 "trtype": "TCP", 00:22:05.580 "max_queue_depth": 128, 00:22:05.580 "max_io_qpairs_per_ctrlr": 127, 00:22:05.580 "in_capsule_data_size": 4096, 00:22:05.580 "max_io_size": 131072, 00:22:05.580 "io_unit_size": 131072, 00:22:05.580 "max_aq_depth": 128, 00:22:05.580 "num_shared_buffers": 511, 00:22:05.580 "buf_cache_size": 4294967295, 00:22:05.580 "dif_insert_or_strip": false, 00:22:05.580 "zcopy": false, 00:22:05.580 "c2h_success": false, 00:22:05.580 "sock_priority": 0, 00:22:05.580 "abort_timeout_sec": 1, 00:22:05.580 "ack_timeout": 0, 00:22:05.580 "data_wr_pool_size": 0 00:22:05.580 } 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "method": "nvmf_create_subsystem", 00:22:05.580 "params": { 00:22:05.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.580 "allow_any_host": false, 00:22:05.580 "serial_number": "00000000000000000000", 00:22:05.580 "model_number": "SPDK bdev Controller", 00:22:05.580 "max_namespaces": 32, 00:22:05.580 "min_cntlid": 1, 00:22:05.580 "max_cntlid": 65519, 00:22:05.580 "ana_reporting": false 00:22:05.580 } 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "method": "nvmf_subsystem_add_host", 00:22:05.580 "params": { 00:22:05.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.580 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.580 "psk": "key0" 00:22:05.580 } 
00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "method": "nvmf_subsystem_add_ns", 00:22:05.580 "params": { 00:22:05.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.580 "namespace": { 00:22:05.580 "nsid": 1, 00:22:05.580 "bdev_name": "malloc0", 00:22:05.580 "nguid": "555C1729D60E471595A8E96C43456299", 00:22:05.580 "uuid": "555c1729-d60e-4715-95a8-e96c43456299", 00:22:05.580 "no_auto_visible": false 00:22:05.580 } 00:22:05.580 } 00:22:05.580 }, 00:22:05.580 { 00:22:05.580 "method": "nvmf_subsystem_add_listener", 00:22:05.580 "params": { 00:22:05.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.580 "listen_address": { 00:22:05.580 "trtype": "TCP", 00:22:05.580 "adrfam": "IPv4", 00:22:05.580 "traddr": "10.0.0.2", 00:22:05.580 "trsvcid": "4420" 00:22:05.580 }, 00:22:05.580 "secure_channel": false, 00:22:05.580 "sock_impl": "ssl" 00:22:05.580 } 00:22:05.580 } 00:22:05.580 ] 00:22:05.580 } 00:22:05.580 ] 00:22:05.580 }' 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3098773 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3098773 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3098773 ']' 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.580 15:27:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.837 [2024-07-15 15:27:09.533043] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:05.837 [2024-07-15 15:27:09.533092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.837 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.837 [2024-07-15 15:27:09.604090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.837 [2024-07-15 15:27:09.671772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.837 [2024-07-15 15:27:09.671810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.837 [2024-07-15 15:27:09.671820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.837 [2024-07-15 15:27:09.671828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.837 [2024-07-15 15:27:09.671839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
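The nvmf target above is launched with its whole JSON configuration streamed over a file descriptor (-c /dev/fd/62) rather than written to a file on disk. A minimal sketch of that same pattern, assuming a trivial one-subsystem config as a placeholder instead of the full config echoed above:

  # Sketch only: hand nvmf_tgt its JSON config through bash process
  # substitution; the config body below is a placeholder assumption,
  # and the binary path is shortened relative to the workspace root.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(cat <<'EOF'
  {
    "subsystems": [
      { "subsystem": "nvmf", "config": [] }
    ]
  }
  EOF
  )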
00:22:05.837 [2024-07-15 15:27:09.671893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.095 [2024-07-15 15:27:09.882138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.095 [2024-07-15 15:27:09.914177] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:06.095 [2024-07-15 15:27:09.925135] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3098817 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3098817 /var/tmp/bdevperf.sock 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3098817 ']' 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
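At this point the target is listening with TLS enabled ("sock_impl": "ssl", with the pre-shared key registered as key0), and bdevperf is started the same way, its config echoed onto /dev/fd/63. A condensed sketch of the client-side sequence the following trace walks through, reusing only commands that appear in this log (paths are shortened relative to the workspace root, and the shortened config variable is an assumption):

  # Start bdevperf on its own RPC socket; -z keeps it alive waiting for
  # an RPC to kick off the run, -c feeds the JSON config over an fd.
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &
  # Confirm the controller from the config came up over the TLS channel...
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # ...then drive the actual verify workload.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests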
00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.660 15:27:10 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:06.660 "subsystems": [ 00:22:06.660 { 00:22:06.660 "subsystem": "keyring", 00:22:06.660 "config": [ 00:22:06.660 { 00:22:06.660 "method": "keyring_file_add_key", 00:22:06.660 "params": { 00:22:06.660 "name": "key0", 00:22:06.660 "path": "/tmp/tmp.ehJcbKIws5" 00:22:06.660 } 00:22:06.660 } 00:22:06.660 ] 00:22:06.660 }, 00:22:06.660 { 00:22:06.660 "subsystem": "iobuf", 00:22:06.660 "config": [ 00:22:06.660 { 00:22:06.660 "method": "iobuf_set_options", 00:22:06.660 "params": { 00:22:06.660 "small_pool_count": 8192, 00:22:06.660 "large_pool_count": 1024, 00:22:06.660 "small_bufsize": 8192, 00:22:06.660 "large_bufsize": 135168 00:22:06.660 } 00:22:06.660 } 00:22:06.660 ] 00:22:06.660 }, 00:22:06.660 { 00:22:06.660 "subsystem": "sock", 00:22:06.660 "config": [ 00:22:06.660 { 00:22:06.660 "method": "sock_set_default_impl", 00:22:06.660 "params": { 00:22:06.660 "impl_name": "posix" 00:22:06.660 } 00:22:06.660 }, 00:22:06.660 { 00:22:06.660 "method": "sock_impl_set_options", 00:22:06.660 "params": { 00:22:06.660 "impl_name": "ssl", 00:22:06.660 "recv_buf_size": 4096, 00:22:06.660 "send_buf_size": 4096, 00:22:06.660 "enable_recv_pipe": true, 00:22:06.660 "enable_quickack": false, 00:22:06.660 "enable_placement_id": 0, 00:22:06.660 "enable_zerocopy_send_server": true, 00:22:06.660 "enable_zerocopy_send_client": false, 00:22:06.660 "zerocopy_threshold": 0, 00:22:06.660 "tls_version": 0, 00:22:06.660 "enable_ktls": false 00:22:06.660 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "sock_impl_set_options", 00:22:06.661 "params": { 00:22:06.661 "impl_name": "posix", 00:22:06.661 "recv_buf_size": 2097152, 00:22:06.661 "send_buf_size": 2097152, 00:22:06.661 "enable_recv_pipe": true, 00:22:06.661 "enable_quickack": false, 00:22:06.661 "enable_placement_id": 0, 00:22:06.661 "enable_zerocopy_send_server": true, 00:22:06.661 "enable_zerocopy_send_client": false, 00:22:06.661 "zerocopy_threshold": 0, 00:22:06.661 "tls_version": 0, 00:22:06.661 "enable_ktls": false 00:22:06.661 } 00:22:06.661 } 00:22:06.661 ] 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "subsystem": "vmd", 00:22:06.661 "config": [] 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "subsystem": "accel", 00:22:06.661 "config": [ 00:22:06.661 { 00:22:06.661 "method": "accel_set_options", 00:22:06.661 "params": { 00:22:06.661 "small_cache_size": 128, 00:22:06.661 "large_cache_size": 16, 00:22:06.661 "task_count": 2048, 00:22:06.661 "sequence_count": 2048, 00:22:06.661 "buf_count": 2048 00:22:06.661 } 00:22:06.661 } 00:22:06.661 ] 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "subsystem": "bdev", 00:22:06.661 "config": [ 00:22:06.661 { 00:22:06.661 "method": "bdev_set_options", 00:22:06.661 "params": { 00:22:06.661 "bdev_io_pool_size": 65535, 00:22:06.661 "bdev_io_cache_size": 256, 00:22:06.661 "bdev_auto_examine": true, 00:22:06.661 "iobuf_small_cache_size": 128, 00:22:06.661 "iobuf_large_cache_size": 16 00:22:06.661 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "bdev_raid_set_options", 00:22:06.661 "params": { 00:22:06.661 "process_window_size_kb": 1024 00:22:06.661 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "bdev_iscsi_set_options", 00:22:06.661 "params": { 00:22:06.661 "timeout_sec": 30 00:22:06.661 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "bdev_nvme_set_options", 00:22:06.661 "params": { 00:22:06.661 "action_on_timeout": "none", 
00:22:06.661 "timeout_us": 0, 00:22:06.661 "timeout_admin_us": 0, 00:22:06.661 "keep_alive_timeout_ms": 10000, 00:22:06.661 "arbitration_burst": 0, 00:22:06.661 "low_priority_weight": 0, 00:22:06.661 "medium_priority_weight": 0, 00:22:06.661 "high_priority_weight": 0, 00:22:06.661 "nvme_adminq_poll_period_us": 10000, 00:22:06.661 "nvme_ioq_poll_period_us": 0, 00:22:06.661 "io_queue_requests": 512, 00:22:06.661 "delay_cmd_submit": true, 00:22:06.661 "transport_retry_count": 4, 00:22:06.661 "bdev_retry_count": 3, 00:22:06.661 "transport_ack_timeout": 0, 00:22:06.661 "ctrlr_loss_timeout_sec": 0, 00:22:06.661 "reconnect_delay_sec": 0, 00:22:06.661 "fast_io_fail_timeout_sec": 0, 00:22:06.661 "disable_auto_failback": false, 00:22:06.661 "generate_uuids": false, 00:22:06.661 "transport_tos": 0, 00:22:06.661 "nvme_error_stat": false, 00:22:06.661 "rdma_srq_size": 0, 00:22:06.661 "io_path_stat": false, 00:22:06.661 "allow_accel_sequence": false, 00:22:06.661 "rdma_max_cq_size": 0, 00:22:06.661 "rdma_cm_event_timeout_ms": 0, 00:22:06.661 "dhchap_digests": [ 00:22:06.661 "sha256", 00:22:06.661 "sha384", 00:22:06.661 "sha512" 00:22:06.661 ], 00:22:06.661 "dhchap_dhgroups": [ 00:22:06.661 "null", 00:22:06.661 "ffdhe2048", 00:22:06.661 "ffdhe3072", 00:22:06.661 "ffdhe4096", 00:22:06.661 "ffdhe6144", 00:22:06.661 "ffdhe8192" 00:22:06.661 ] 00:22:06.661 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "bdev_nvme_attach_controller", 00:22:06.661 "params": { 00:22:06.661 "name": "nvme0", 00:22:06.661 "trtype": "TCP", 00:22:06.661 "adrfam": "IPv4", 00:22:06.661 "traddr": "10.0.0.2", 00:22:06.661 "trsvcid": "4420", 00:22:06.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.661 "prchk_reftag": false, 00:22:06.661 "prchk_guard": false, 00:22:06.661 "ctrlr_loss_timeout_sec": 0, 00:22:06.661 "reconnect_delay_sec": 0, 00:22:06.661 "fast_io_fail_timeout_sec": 0, 00:22:06.661 "psk": "key0", 00:22:06.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.661 "hdgst": false, 00:22:06.661 "ddgst": false 00:22:06.661 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "bdev_nvme_set_hotplug", 00:22:06.661 "params": { 00:22:06.661 "period_us": 100000, 00:22:06.661 "enable": false 00:22:06.661 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "bdev_enable_histogram", 00:22:06.661 "params": { 00:22:06.661 "name": "nvme0n1", 00:22:06.661 "enable": true 00:22:06.661 } 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "method": "bdev_wait_for_examine" 00:22:06.661 } 00:22:06.661 ] 00:22:06.661 }, 00:22:06.661 { 00:22:06.661 "subsystem": "nbd", 00:22:06.661 "config": [] 00:22:06.661 } 00:22:06.661 ] 00:22:06.661 }' 00:22:06.661 15:27:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.661 [2024-07-15 15:27:10.408902] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:22:06.661 [2024-07-15 15:27:10.408953] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098817 ] 00:22:06.661 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.661 [2024-07-15 15:27:10.477258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.661 [2024-07-15 15:27:10.547703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.919 [2024-07-15 15:27:10.697424] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:07.484 15:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.484 15:27:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:07.484 15:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:07.484 15:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:07.484 15:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.484 15:27:11 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.741 Running I/O for 1 seconds... 00:22:08.673 00:22:08.673 Latency(us) 00:22:08.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.673 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:08.673 Verification LBA range: start 0x0 length 0x2000 00:22:08.673 nvme0n1 : 1.03 4256.50 16.63 0.00 0.00 29700.33 6920.60 64172.85 00:22:08.673 =================================================================================================================== 00:22:08.673 Total : 4256.50 16.63 0.00 0.00 29700.33 6920.60 64172.85 00:22:08.673 0 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:08.673 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:08.673 nvmf_trace.0 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3098817 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3098817 ']' 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3098817 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3098817 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3098817' 00:22:08.932 killing process with pid 3098817 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3098817 00:22:08.932 Received shutdown signal, test time was about 1.000000 seconds 00:22:08.932 00:22:08.932 Latency(us) 00:22:08.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.932 =================================================================================================================== 00:22:08.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3098817 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.932 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.932 rmmod nvme_tcp 00:22:09.190 rmmod nvme_fabrics 00:22:09.190 rmmod nvme_keyring 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3098773 ']' 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3098773 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3098773 ']' 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3098773 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3098773 00:22:09.190 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:09.191 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:09.191 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3098773' 00:22:09.191 killing process with pid 3098773 00:22:09.191 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3098773 00:22:09.191 15:27:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3098773 00:22:09.449 15:27:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.449 15:27:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.449 15:27:13 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.449 15:27:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.449 15:27:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.449 15:27:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.449 15:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.449 15:27:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.353 15:27:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.353 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yklIYzCxqz /tmp/tmp.Z3rp7fRWXU /tmp/tmp.ehJcbKIws5 00:22:11.353 00:22:11.353 real 1m26.201s 00:22:11.353 user 2m5.926s 00:22:11.353 sys 0m35.369s 00:22:11.353 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.353 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.353 ************************************ 00:22:11.353 END TEST nvmf_tls 00:22:11.353 ************************************ 00:22:11.613 15:27:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:11.613 15:27:15 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:11.613 15:27:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:11.613 15:27:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.613 15:27:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.613 ************************************ 00:22:11.613 START TEST nvmf_fips 00:22:11.613 ************************************ 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:11.613 * Looking for test storage... 
00:22:11.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.613 15:27:15 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:11.613 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:11.614 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:11.873 Error setting digest 00:22:11.873 00624F057E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:11.873 00624F057E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.873 15:27:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.431 
15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:18.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:18.431 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:18.431 Found net devices under 0000:af:00.0: cvl_0_0 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:18.431 Found net devices under 0000:af:00.1: cvl_0_1 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:18.431 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:18.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:22:18.432 00:22:18.432 --- 10.0.0.2 ping statistics --- 00:22:18.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.432 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:22:18.432 00:22:18.432 --- 10.0.0.1 ping statistics --- 00:22:18.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.432 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:18.432 15:27:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3103042 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3103042 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3103042 ']' 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.432 15:27:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:18.432 [2024-07-15 15:27:22.076015] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:18.432 [2024-07-15 15:27:22.076069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.432 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.432 [2024-07-15 15:27:22.150770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.432 [2024-07-15 15:27:22.221533] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.432 [2024-07-15 15:27:22.221573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
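The FIPS gate traced above condenses to roughly the following stand-alone check. This is a sketch of the logic only, not the literal fips.sh source; the real script additionally parses the fipsinstall warning text and generates a dedicated OPENSSL_CONF before starting the target.
  # Sketch of the FIPS-readiness checks exercised above (logic only).
  moddir=$(openssl info -modulesdir)            # e.g. /usr/lib64/ossl-modules
  [[ -f "$moddir/fips.so" ]] || { echo "no FIPS provider module" >&2; exit 1; }
  # Both the base and fips providers must be active for TLS to work in FIPS mode.
  openssl list -providers | grep -qi 'base' || { echo "base provider missing" >&2; exit 1; }
  openssl list -providers | grep -qi 'fips' || { echo "fips provider missing" >&2; exit 1; }
  # Negative check: a non-approved digest such as MD5 must fail once FIPS is enforced.
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 unexpectedly succeeded; FIPS mode is not enforced" >&2
      exit 1
  fi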
00:22:18.432 [2024-07-15 15:27:22.221582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.432 [2024-07-15 15:27:22.221590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.432 [2024-07-15 15:27:22.221613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.432 [2024-07-15 15:27:22.221634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:18.996 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:18.997 15:27:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:19.254 [2024-07-15 15:27:23.042884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.254 [2024-07-15 15:27:23.058873] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.254 [2024-07-15 15:27:23.059058] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.254 [2024-07-15 15:27:23.087360] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:19.254 malloc0 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3103192 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3103192 /var/tmp/bdevperf.sock 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3103192 ']' 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.254 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:19.519 [2024-07-15 15:27:23.163964] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:19.519 [2024-07-15 15:27:23.164023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103192 ] 00:22:19.519 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.519 [2024-07-15 15:27:23.230560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.519 [2024-07-15 15:27:23.304926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.089 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.089 15:27:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:20.089 15:27:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:20.347 [2024-07-15 15:27:24.091778] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.347 [2024-07-15 15:27:24.091877] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:20.347 TLSTESTn1 00:22:20.347 15:27:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.347 Running I/O for 10 seconds... 
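Condensed from the trace, the TLS data path being timed here is the sequence below; all option values are copied from the traced commands, with the absolute workspace paths shortened to ./ for readability.
  # PSK file setup and TLS attach, as traced above (paths shortened to ./).
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt && chmod 0600 key.txt    # key file must not be world-readable
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests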
00:22:32.557
00:22:32.557 Latency(us)
00:22:32.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:32.557 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:32.557 Verification LBA range: start 0x0 length 0x2000
00:22:32.557 TLSTESTn1 : 10.03 4390.85 17.15 0.00 0.00 29092.94 6763.32 57042.53
00:22:32.557 ===================================================================================================================
00:22:32.557 Total : 4390.85 17.15 0.00 0.00 29092.94 6763.32 57042.53
00:22:32.557 0
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:32.557 nvmf_trace.0
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3103192
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3103192 ']'
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3103192
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3103192
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3103192'
00:22:32.557 killing process with pid 3103192
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3103192
00:22:32.557 Received shutdown signal, test time was about 10.000000 seconds
00:22:32.557
00:22:32.557 Latency(us)
00:22:32.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:32.557 ===================================================================================================================
00:22:32.557 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:32.557 [2024-07-15 15:27:34.454421] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3103192
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.557 rmmod nvme_tcp 00:22:32.557 rmmod nvme_fabrics 00:22:32.557 rmmod nvme_keyring 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3103042 ']' 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3103042 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3103042 ']' 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3103042 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3103042 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3103042' 00:22:32.557 killing process with pid 3103042 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3103042 00:22:32.557 [2024-07-15 15:27:34.746798] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3103042 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.557 15:27:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.123 15:27:36 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:33.123 15:27:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:33.123 00:22:33.123 real 0m21.701s 00:22:33.123 user 0m21.601s 00:22:33.123 sys 0m10.805s 00:22:33.123 15:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.123 15:27:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:33.123 ************************************ 00:22:33.123 END TEST nvmf_fips 
00:22:33.123 ************************************ 00:22:33.381 15:27:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:33.381 15:27:37 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:33.381 15:27:37 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:33.381 15:27:37 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:33.381 15:27:37 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:33.381 15:27:37 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.381 15:27:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:39.946 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:39.946 15:27:43 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:39.946 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.946 15:27:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:39.947 Found net devices under 0000:af:00.0: cvl_0_0 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:39.947 Found net devices under 0000:af:00.1: cvl_0_1 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:39.947 15:27:43 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:39.947 15:27:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:39.947 15:27:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
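The repeated "Found ..." blocks come from a discovery loop of roughly this shape; a simplified sketch, since the real gather_supported_nvmf_pci_devs also filters by device ID, driver, and transport type as the trace shows.
  # Simplified sketch of the loop behind the repeated "Found ..." lines.
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs behind this port
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip paths, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done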
00:22:39.947 15:27:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:40.206 ************************************ 00:22:40.206 START TEST nvmf_perf_adq 00:22:40.206 ************************************ 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:40.206 * Looking for test storage... 00:22:40.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.206 15:27:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:40.206 15:27:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.206 15:27:44 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:46.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:46.870 Found 0000:af:00.1 (0x8086 - 0x159b) 
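The adq_reload_driver step traced just below amounts to the following; the comment on why the reload is needed is an interpretation, the commands are as traced.
  # Reload the ice driver before ADQ configuration (perf_adq.sh@53-55).
  rmmod ice
  modprobe ice
  sleep 5    # give the E810 ports time to re-register their netdevs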
00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.870 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:46.871 Found net devices under 0000:af:00.0: cvl_0_0 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:46.871 Found net devices under 0000:af:00.1: cvl_0_1 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:46.871 15:27:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:48.247 15:27:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:50.150 15:27:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:55.421 15:27:59 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:55.421 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:55.421 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:55.421 Found net devices under 0000:af:00.0: cvl_0_0 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:55.421 Found net devices under 0000:af:00.1: cvl_0_1 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.421 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.680 15:27:59 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:22:55.680 00:22:55.680 --- 10.0.0.2 ping statistics --- 00:22:55.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.680 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:22:55.680 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:22:55.680 00:22:55.680 --- 10.0.0.1 ping statistics --- 00:22:55.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.680 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3113518 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3113518 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3113518 ']' 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.681 15:27:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.681 [2024-07-15 15:27:59.440907] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
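The namespace setup logged above (nvmf_tcp_init, nvmf/common.sh@229-268) reduces to a short, reproducible sequence. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing the harness derived from the two E810 ports:

  # Target-side port moves into its own namespace; initiator side stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP (port 4420) and confirm reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix above) is what lets one machine act as both initiator and target over the physical back-to-back link.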
00:22:55.681 [2024-07-15 15:27:59.440956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.681 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.681 [2024-07-15 15:27:59.515145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.939 [2024-07-15 15:27:59.590589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.939 [2024-07-15 15:27:59.590630] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.939 [2024-07-15 15:27:59.590640] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.939 [2024-07-15 15:27:59.590649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.939 [2024-07-15 15:27:59.590657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.939 [2024-07-15 15:27:59.590710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.939 [2024-07-15 15:27:59.590805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.939 [2024-07-15 15:27:59.590866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.939 [2024-07-15 15:27:59.590869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.507 15:28:00 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.766 [2024-07-15 15:28:00.442700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.766 Malloc1 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.766 [2024-07-15 15:28:00.489356] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3113757 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:56.766 15:28:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:56.766 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.668 15:28:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:58.668 15:28:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.668 15:28:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.668 15:28:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.668 15:28:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:58.668 
"tick_rate": 2500000000, 00:22:58.668 "poll_groups": [ 00:22:58.668 { 00:22:58.668 "name": "nvmf_tgt_poll_group_000", 00:22:58.668 "admin_qpairs": 1, 00:22:58.668 "io_qpairs": 1, 00:22:58.668 "current_admin_qpairs": 1, 00:22:58.668 "current_io_qpairs": 1, 00:22:58.668 "pending_bdev_io": 0, 00:22:58.668 "completed_nvme_io": 21344, 00:22:58.668 "transports": [ 00:22:58.668 { 00:22:58.668 "trtype": "TCP" 00:22:58.668 } 00:22:58.668 ] 00:22:58.668 }, 00:22:58.668 { 00:22:58.668 "name": "nvmf_tgt_poll_group_001", 00:22:58.668 "admin_qpairs": 0, 00:22:58.668 "io_qpairs": 1, 00:22:58.668 "current_admin_qpairs": 0, 00:22:58.668 "current_io_qpairs": 1, 00:22:58.668 "pending_bdev_io": 0, 00:22:58.668 "completed_nvme_io": 21056, 00:22:58.668 "transports": [ 00:22:58.668 { 00:22:58.668 "trtype": "TCP" 00:22:58.668 } 00:22:58.668 ] 00:22:58.668 }, 00:22:58.668 { 00:22:58.668 "name": "nvmf_tgt_poll_group_002", 00:22:58.668 "admin_qpairs": 0, 00:22:58.668 "io_qpairs": 1, 00:22:58.668 "current_admin_qpairs": 0, 00:22:58.668 "current_io_qpairs": 1, 00:22:58.668 "pending_bdev_io": 0, 00:22:58.668 "completed_nvme_io": 21625, 00:22:58.668 "transports": [ 00:22:58.668 { 00:22:58.668 "trtype": "TCP" 00:22:58.668 } 00:22:58.668 ] 00:22:58.668 }, 00:22:58.668 { 00:22:58.668 "name": "nvmf_tgt_poll_group_003", 00:22:58.668 "admin_qpairs": 0, 00:22:58.668 "io_qpairs": 1, 00:22:58.668 "current_admin_qpairs": 0, 00:22:58.668 "current_io_qpairs": 1, 00:22:58.668 "pending_bdev_io": 0, 00:22:58.668 "completed_nvme_io": 21281, 00:22:58.668 "transports": [ 00:22:58.668 { 00:22:58.668 "trtype": "TCP" 00:22:58.668 } 00:22:58.668 ] 00:22:58.668 } 00:22:58.668 ] 00:22:58.668 }' 00:22:58.668 15:28:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:58.668 15:28:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:58.927 15:28:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:58.927 15:28:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:58.927 15:28:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3113757 00:23:07.042 Initializing NVMe Controllers 00:23:07.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:07.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:07.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:07.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:07.042 Initialization complete. Launching workers. 
00:23:07.042 ======================================================== 00:23:07.042 Latency(us) 00:23:07.042 Device Information : IOPS MiB/s Average min max 00:23:07.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11399.90 44.53 5615.03 1763.08 9197.17 00:23:07.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11216.30 43.81 5706.08 1569.58 9944.72 00:23:07.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11330.30 44.26 5649.00 1941.70 10097.57 00:23:07.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11284.10 44.08 5672.46 1473.20 10319.07 00:23:07.042 ======================================================== 00:23:07.042 Total : 45230.60 176.68 5660.45 1473.20 10319.07 00:23:07.042 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.042 rmmod nvme_tcp 00:23:07.042 rmmod nvme_fabrics 00:23:07.042 rmmod nvme_keyring 00:23:07.042 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3113518 ']' 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3113518 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3113518 ']' 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3113518 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3113518 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3113518' 00:23:07.043 killing process with pid 3113518 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3113518 00:23:07.043 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3113518 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.301 15:28:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.204 15:28:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.204 15:28:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:09.204 15:28:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:10.606 15:28:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:13.140 15:28:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.411 15:28:21 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.411 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:18.412 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:18.412 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:18.412 Found net devices under 0000:af:00.0: cvl_0_0 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:18.412 Found net devices under 0000:af:00.1: cvl_0_1 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.412 
15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:23:18.412 00:23:18.412 --- 10.0.0.2 ping statistics --- 00:23:18.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.412 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:23:18.412 00:23:18.412 --- 10.0.0.1 ping statistics --- 00:23:18.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.412 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:18.412 net.core.busy_poll = 1 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:18.412 net.core.busy_read = 1 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:18.412 15:28:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3117806 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3117806 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3117806 ']' 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.412 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.412 [2024-07-15 15:28:22.140542] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:23:18.412 [2024-07-15 15:28:22.140589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.412 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.412 [2024-07-15 15:28:22.211100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.412 [2024-07-15 15:28:22.283098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.412 [2024-07-15 15:28:22.283140] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.412 [2024-07-15 15:28:22.283149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.412 [2024-07-15 15:28:22.283157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.412 [2024-07-15 15:28:22.283164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
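The ADQ plumbing for this second run is the ethtool/tc sequence logged just above (adq_configure_driver, perf_adq.sh@22-38). Condensed into a standalone sketch with the same device, destination, and port, and run inside the target namespace as the harness does:

  # Hardware TC offload on the ice port; busy-poll sockets instead of interrupt wakeups
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 on queues 0-1, TC1 (the ADQ class) on queues 2-3
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # Hardware flower filter: NVMe/TCP traffic to 10.0.0.2:4420 lands in TC 1
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
      prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The harness then pins XPS/RX queues via scripts/perf/nvmf/set_xps_rxqs; the only RPC-side difference from the first (non-ADQ) run is sock_impl_set_options --enable-placement-id 1 instead of 0.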
00:23:18.412 [2024-07-15 15:28:22.283230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.412 [2024-07-15 15:28:22.283326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.412 [2024-07-15 15:28:22.283409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.412 [2024-07-15 15:28:22.283411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 15:28:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 [2024-07-15 15:28:23.129682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 Malloc1 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.348 [2024-07-15 15:28:23.180396] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3117974 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:19.348 15:28:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:19.348 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:21.880 "tick_rate": 2500000000, 00:23:21.880 "poll_groups": [ 00:23:21.880 { 00:23:21.880 "name": "nvmf_tgt_poll_group_000", 00:23:21.880 "admin_qpairs": 1, 00:23:21.880 "io_qpairs": 4, 00:23:21.880 "current_admin_qpairs": 1, 00:23:21.880 "current_io_qpairs": 4, 00:23:21.880 "pending_bdev_io": 0, 00:23:21.880 "completed_nvme_io": 46324, 00:23:21.880 "transports": [ 00:23:21.880 { 00:23:21.880 "trtype": "TCP" 00:23:21.880 } 00:23:21.880 ] 00:23:21.880 }, 00:23:21.880 { 00:23:21.880 "name": "nvmf_tgt_poll_group_001", 00:23:21.880 "admin_qpairs": 0, 00:23:21.880 "io_qpairs": 0, 00:23:21.880 "current_admin_qpairs": 0, 00:23:21.880 "current_io_qpairs": 0, 00:23:21.880 "pending_bdev_io": 0, 00:23:21.880 "completed_nvme_io": 0, 00:23:21.880 "transports": [ 00:23:21.880 { 00:23:21.880 "trtype": "TCP" 00:23:21.880 } 00:23:21.880 ] 00:23:21.880 }, 00:23:21.880 { 00:23:21.880 "name": "nvmf_tgt_poll_group_002", 00:23:21.880 "admin_qpairs": 0, 00:23:21.880 "io_qpairs": 0, 00:23:21.880 "current_admin_qpairs": 0, 00:23:21.880 "current_io_qpairs": 0, 00:23:21.880 "pending_bdev_io": 0, 00:23:21.880 "completed_nvme_io": 0, 00:23:21.880 
"transports": [ 00:23:21.880 { 00:23:21.880 "trtype": "TCP" 00:23:21.880 } 00:23:21.880 ] 00:23:21.880 }, 00:23:21.880 { 00:23:21.880 "name": "nvmf_tgt_poll_group_003", 00:23:21.880 "admin_qpairs": 0, 00:23:21.880 "io_qpairs": 0, 00:23:21.880 "current_admin_qpairs": 0, 00:23:21.880 "current_io_qpairs": 0, 00:23:21.880 "pending_bdev_io": 0, 00:23:21.880 "completed_nvme_io": 0, 00:23:21.880 "transports": [ 00:23:21.880 { 00:23:21.880 "trtype": "TCP" 00:23:21.880 } 00:23:21.880 ] 00:23:21.880 } 00:23:21.880 ] 00:23:21.880 }' 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:23:21.880 15:28:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3117974 00:23:29.995 Initializing NVMe Controllers 00:23:29.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:29.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:29.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:29.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:29.995 Initialization complete. Launching workers. 00:23:29.995 ======================================================== 00:23:29.995 Latency(us) 00:23:29.995 Device Information : IOPS MiB/s Average min max 00:23:29.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6545.96 25.57 9777.91 1175.48 57316.01 00:23:29.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6076.76 23.74 10553.88 1351.17 56168.98 00:23:29.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6230.76 24.34 10272.65 1715.93 56786.48 00:23:29.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5680.16 22.19 11267.43 2048.41 55583.09 00:23:29.995 ======================================================== 00:23:29.995 Total : 24533.65 95.83 10440.62 1175.48 57316.01 00:23:29.995 00:23:29.995 15:28:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:29.995 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:29.995 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:29.995 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.995 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:29.995 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.995 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.995 rmmod nvme_tcp 00:23:29.995 rmmod nvme_fabrics 00:23:29.995 rmmod nvme_keyring 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3117806 ']' 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 
3117806 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3117806 ']' 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3117806 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3117806 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3117806' 00:23:29.996 killing process with pid 3117806 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3117806 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3117806 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.996 15:28:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.899 15:28:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.899 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:31.899 00:23:31.899 real 0m51.921s 00:23:31.899 user 2m46.522s 00:23:31.899 sys 0m13.988s 00:23:31.899 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:31.899 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.899 ************************************ 00:23:31.899 END TEST nvmf_perf_adq 00:23:31.899 ************************************ 00:23:32.157 15:28:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:32.157 15:28:35 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:32.157 15:28:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:32.157 15:28:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.157 15:28:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:32.157 ************************************ 00:23:32.157 START TEST nvmf_shutdown 00:23:32.157 ************************************ 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:32.157 * Looking for test storage... 
00:23:32.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.157 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.158 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:32.158 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:32.158 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.158 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.158 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.158 15:28:35 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:32.158 ************************************ 00:23:32.158 START TEST nvmf_shutdown_tc1 00:23:32.158 ************************************ 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:32.158 15:28:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.158 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.416 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:32.416 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:32.416 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:32.416 15:28:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:38.978 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:38.978 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.978 15:28:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:38.978 Found net devices under 0000:af:00.0: cvl_0_0 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:38.978 Found net devices under 0000:af:00.1: cvl_0_1 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.978 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:23:38.978 00:23:38.978 --- 10.0.0.2 ping statistics --- 00:23:38.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.979 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:23:38.979 00:23:38.979 --- 10.0.0.1 ping statistics --- 00:23:38.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.979 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3123454 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3123454 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3123454 ']' 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.979 15:28:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.237 [2024-07-15 15:28:42.909938] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:23:39.237 [2024-07-15 15:28:42.909987] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.237 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.237 [2024-07-15 15:28:42.983978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.237 [2024-07-15 15:28:43.057800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.237 [2024-07-15 15:28:43.057843] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.237 [2024-07-15 15:28:43.057853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.237 [2024-07-15 15:28:43.057861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.237 [2024-07-15 15:28:43.057869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.237 [2024-07-15 15:28:43.057967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.237 [2024-07-15 15:28:43.058052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.237 [2024-07-15 15:28:43.058162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.237 [2024-07-15 15:28:43.058163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:40.171 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.171 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:40.171 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.171 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.172 [2024-07-15 15:28:43.771786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.172 15:28:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.172 15:28:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.172 Malloc1 00:23:40.172 [2024-07-15 15:28:43.886771] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.172 Malloc2 00:23:40.172 Malloc3 00:23:40.172 Malloc4 00:23:40.172 Malloc5 00:23:40.172 Malloc6 00:23:40.431 Malloc7 00:23:40.431 Malloc8 00:23:40.431 Malloc9 00:23:40.431 Malloc10 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3123765 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3123765 
/var/tmp/bdevperf.sock 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3123765 ']' 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.431 { 00:23:40.431 "params": { 00:23:40.431 "name": "Nvme$subsystem", 00:23:40.431 "trtype": "$TEST_TRANSPORT", 00:23:40.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.431 "adrfam": "ipv4", 00:23:40.431 "trsvcid": "$NVMF_PORT", 00:23:40.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.431 "hdgst": ${hdgst:-false}, 00:23:40.431 "ddgst": ${ddgst:-false} 00:23:40.431 }, 00:23:40.431 "method": "bdev_nvme_attach_controller" 00:23:40.431 } 00:23:40.431 EOF 00:23:40.431 )") 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.431 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.431 { 00:23:40.431 "params": { 00:23:40.431 "name": "Nvme$subsystem", 00:23:40.431 "trtype": "$TEST_TRANSPORT", 00:23:40.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.431 "adrfam": "ipv4", 00:23:40.431 "trsvcid": "$NVMF_PORT", 00:23:40.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.431 "hdgst": ${hdgst:-false}, 00:23:40.431 "ddgst": ${ddgst:-false} 00:23:40.431 }, 00:23:40.431 "method": "bdev_nvme_attach_controller" 00:23:40.431 } 00:23:40.431 EOF 00:23:40.431 )") 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.691 { 00:23:40.691 "params": { 00:23:40.691 
"name": "Nvme$subsystem", 00:23:40.691 "trtype": "$TEST_TRANSPORT", 00:23:40.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.691 "adrfam": "ipv4", 00:23:40.691 "trsvcid": "$NVMF_PORT", 00:23:40.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.691 "hdgst": ${hdgst:-false}, 00:23:40.691 "ddgst": ${ddgst:-false} 00:23:40.691 }, 00:23:40.691 "method": "bdev_nvme_attach_controller" 00:23:40.691 } 00:23:40.691 EOF 00:23:40.691 )") 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.691 { 00:23:40.691 "params": { 00:23:40.691 "name": "Nvme$subsystem", 00:23:40.691 "trtype": "$TEST_TRANSPORT", 00:23:40.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.691 "adrfam": "ipv4", 00:23:40.691 "trsvcid": "$NVMF_PORT", 00:23:40.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.691 "hdgst": ${hdgst:-false}, 00:23:40.691 "ddgst": ${ddgst:-false} 00:23:40.691 }, 00:23:40.691 "method": "bdev_nvme_attach_controller" 00:23:40.691 } 00:23:40.691 EOF 00:23:40.691 )") 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.691 { 00:23:40.691 "params": { 00:23:40.691 "name": "Nvme$subsystem", 00:23:40.691 "trtype": "$TEST_TRANSPORT", 00:23:40.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.691 "adrfam": "ipv4", 00:23:40.691 "trsvcid": "$NVMF_PORT", 00:23:40.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.691 "hdgst": ${hdgst:-false}, 00:23:40.691 "ddgst": ${ddgst:-false} 00:23:40.691 }, 00:23:40.691 "method": "bdev_nvme_attach_controller" 00:23:40.691 } 00:23:40.691 EOF 00:23:40.691 )") 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.691 [2024-07-15 15:28:44.370803] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:23:40.691 [2024-07-15 15:28:44.370859] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.691 { 00:23:40.691 "params": { 00:23:40.691 "name": "Nvme$subsystem", 00:23:40.691 "trtype": "$TEST_TRANSPORT", 00:23:40.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.691 "adrfam": "ipv4", 00:23:40.691 "trsvcid": "$NVMF_PORT", 00:23:40.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.691 "hdgst": ${hdgst:-false}, 00:23:40.691 "ddgst": ${ddgst:-false} 00:23:40.691 }, 00:23:40.691 "method": "bdev_nvme_attach_controller" 00:23:40.691 } 00:23:40.691 EOF 00:23:40.691 )") 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.691 { 00:23:40.691 "params": { 00:23:40.691 "name": "Nvme$subsystem", 00:23:40.691 "trtype": "$TEST_TRANSPORT", 00:23:40.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.691 "adrfam": "ipv4", 00:23:40.691 "trsvcid": "$NVMF_PORT", 00:23:40.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.691 "hdgst": ${hdgst:-false}, 00:23:40.691 "ddgst": ${ddgst:-false} 00:23:40.691 }, 00:23:40.691 "method": "bdev_nvme_attach_controller" 00:23:40.691 } 00:23:40.691 EOF 00:23:40.691 )") 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.691 { 00:23:40.691 "params": { 00:23:40.691 "name": "Nvme$subsystem", 00:23:40.691 "trtype": "$TEST_TRANSPORT", 00:23:40.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.691 "adrfam": "ipv4", 00:23:40.691 "trsvcid": "$NVMF_PORT", 00:23:40.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.691 "hdgst": ${hdgst:-false}, 00:23:40.691 "ddgst": ${ddgst:-false} 00:23:40.691 }, 00:23:40.691 "method": "bdev_nvme_attach_controller" 00:23:40.691 } 00:23:40.691 EOF 00:23:40.691 )") 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.691 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.691 { 00:23:40.691 "params": { 00:23:40.691 "name": "Nvme$subsystem", 00:23:40.691 "trtype": "$TEST_TRANSPORT", 00:23:40.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.691 "adrfam": "ipv4", 00:23:40.691 "trsvcid": "$NVMF_PORT", 00:23:40.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.692 "hdgst": ${hdgst:-false}, 
00:23:40.692 "ddgst": ${ddgst:-false} 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 } 00:23:40.692 EOF 00:23:40.692 )") 00:23:40.692 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.692 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.692 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.692 { 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme$subsystem", 00:23:40.692 "trtype": "$TEST_TRANSPORT", 00:23:40.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "$NVMF_PORT", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.692 "hdgst": ${hdgst:-false}, 00:23:40.692 "ddgst": ${ddgst:-false} 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 } 00:23:40.692 EOF 00:23:40.692 )") 00:23:40.692 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.692 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.692 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:40.692 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:40.692 15:28:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme1", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme2", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme3", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme4", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme5", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.692 "hdgst": false, 00:23:40.692 
"ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme6", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme7", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme8", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme9", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 },{ 00:23:40.692 "params": { 00:23:40.692 "name": "Nvme10", 00:23:40.692 "trtype": "tcp", 00:23:40.692 "traddr": "10.0.0.2", 00:23:40.692 "adrfam": "ipv4", 00:23:40.692 "trsvcid": "4420", 00:23:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.692 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.692 "hdgst": false, 00:23:40.692 "ddgst": false 00:23:40.692 }, 00:23:40.692 "method": "bdev_nvme_attach_controller" 00:23:40.692 }' 00:23:40.692 [2024-07-15 15:28:44.444410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.692 [2024-07-15 15:28:44.514057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3123765 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:42.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3123765 Killed 
$rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:42.070 15:28:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3123454 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.007 { 00:23:43.007 "params": { 00:23:43.007 "name": "Nvme$subsystem", 00:23:43.007 "trtype": "$TEST_TRANSPORT", 00:23:43.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.007 "adrfam": "ipv4", 00:23:43.007 "trsvcid": "$NVMF_PORT", 00:23:43.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.007 "hdgst": ${hdgst:-false}, 00:23:43.007 "ddgst": ${ddgst:-false} 00:23:43.007 }, 00:23:43.007 "method": "bdev_nvme_attach_controller" 00:23:43.007 } 00:23:43.007 EOF 00:23:43.007 )") 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.007 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.007 { 00:23:43.007 "params": { 00:23:43.007 "name": "Nvme$subsystem", 00:23:43.007 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 [2024-07-15 15:28:46.843108] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:23:43.008 [2024-07-15 15:28:46.843163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3124082 ] 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 "hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.008 { 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme$subsystem", 00:23:43.008 "trtype": "$TEST_TRANSPORT", 00:23:43.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "$NVMF_PORT", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.008 
"hdgst": ${hdgst:-false}, 00:23:43.008 "ddgst": ${ddgst:-false} 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 } 00:23:43.008 EOF 00:23:43.008 )") 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.008 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:43.008 15:28:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme1", 00:23:43.008 "trtype": "tcp", 00:23:43.008 "traddr": "10.0.0.2", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "4420", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.008 "hdgst": false, 00:23:43.008 "ddgst": false 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 },{ 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme2", 00:23:43.008 "trtype": "tcp", 00:23:43.008 "traddr": "10.0.0.2", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "4420", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:43.008 "hdgst": false, 00:23:43.008 "ddgst": false 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 },{ 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme3", 00:23:43.008 "trtype": "tcp", 00:23:43.008 "traddr": "10.0.0.2", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "4420", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:43.008 "hdgst": false, 00:23:43.008 "ddgst": false 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 },{ 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme4", 00:23:43.008 "trtype": "tcp", 00:23:43.008 "traddr": "10.0.0.2", 00:23:43.008 "adrfam": "ipv4", 00:23:43.008 "trsvcid": "4420", 00:23:43.008 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:43.008 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:43.008 "hdgst": false, 00:23:43.008 "ddgst": false 00:23:43.008 }, 00:23:43.008 "method": "bdev_nvme_attach_controller" 00:23:43.008 },{ 00:23:43.008 "params": { 00:23:43.008 "name": "Nvme5", 00:23:43.009 "trtype": "tcp", 00:23:43.009 "traddr": "10.0.0.2", 00:23:43.009 "adrfam": "ipv4", 00:23:43.009 "trsvcid": "4420", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:43.009 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:43.009 "hdgst": false, 00:23:43.009 "ddgst": false 00:23:43.009 }, 00:23:43.009 "method": "bdev_nvme_attach_controller" 00:23:43.009 },{ 00:23:43.009 "params": { 00:23:43.009 "name": "Nvme6", 00:23:43.009 "trtype": "tcp", 00:23:43.009 "traddr": "10.0.0.2", 00:23:43.009 "adrfam": "ipv4", 00:23:43.009 "trsvcid": "4420", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:43.009 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:43.009 "hdgst": false, 00:23:43.009 "ddgst": false 00:23:43.009 }, 00:23:43.009 "method": "bdev_nvme_attach_controller" 00:23:43.009 },{ 00:23:43.009 "params": { 00:23:43.009 "name": "Nvme7", 00:23:43.009 "trtype": "tcp", 00:23:43.009 "traddr": "10.0.0.2", 00:23:43.009 "adrfam": "ipv4", 00:23:43.009 "trsvcid": "4420", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:43.009 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:43.009 "hdgst": false, 
00:23:43.009 "ddgst": false 00:23:43.009 }, 00:23:43.009 "method": "bdev_nvme_attach_controller" 00:23:43.009 },{ 00:23:43.009 "params": { 00:23:43.009 "name": "Nvme8", 00:23:43.009 "trtype": "tcp", 00:23:43.009 "traddr": "10.0.0.2", 00:23:43.009 "adrfam": "ipv4", 00:23:43.009 "trsvcid": "4420", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:43.009 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:43.009 "hdgst": false, 00:23:43.009 "ddgst": false 00:23:43.009 }, 00:23:43.009 "method": "bdev_nvme_attach_controller" 00:23:43.009 },{ 00:23:43.009 "params": { 00:23:43.009 "name": "Nvme9", 00:23:43.009 "trtype": "tcp", 00:23:43.009 "traddr": "10.0.0.2", 00:23:43.009 "adrfam": "ipv4", 00:23:43.009 "trsvcid": "4420", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:43.009 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:43.009 "hdgst": false, 00:23:43.009 "ddgst": false 00:23:43.009 }, 00:23:43.009 "method": "bdev_nvme_attach_controller" 00:23:43.009 },{ 00:23:43.009 "params": { 00:23:43.009 "name": "Nvme10", 00:23:43.009 "trtype": "tcp", 00:23:43.009 "traddr": "10.0.0.2", 00:23:43.009 "adrfam": "ipv4", 00:23:43.009 "trsvcid": "4420", 00:23:43.009 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:43.009 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:43.009 "hdgst": false, 00:23:43.009 "ddgst": false 00:23:43.009 }, 00:23:43.009 "method": "bdev_nvme_attach_controller" 00:23:43.009 }' 00:23:43.268 [2024-07-15 15:28:46.914694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.268 [2024-07-15 15:28:46.985501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.646 Running I/O for 1 seconds... 00:23:45.706 00:23:45.706 Latency(us) 00:23:45.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.706 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.706 Verification LBA range: start 0x0 length 0x400 00:23:45.706 Nvme1n1 : 1.02 250.96 15.69 0.00 0.00 252352.92 30618.42 209715.20 00:23:45.707 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme2n1 : 1.07 238.67 14.92 0.00 0.00 260728.63 21181.24 212231.78 00:23:45.707 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme3n1 : 1.16 275.36 17.21 0.00 0.00 223040.80 18245.22 228170.14 00:23:45.707 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme4n1 : 1.08 297.35 18.58 0.00 0.00 202300.62 18769.51 199648.87 00:23:45.707 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme5n1 : 1.17 272.79 17.05 0.00 0.00 218042.53 19084.08 236558.75 00:23:45.707 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme6n1 : 1.11 292.98 18.31 0.00 0.00 197261.07 2306.87 204682.04 00:23:45.707 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme7n1 : 1.18 326.54 20.41 0.00 0.00 176129.23 16462.64 210554.06 00:23:45.707 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme8n1 : 1.18 
326.02 20.38 0.00 0.00 173426.35 13369.34 206359.76 00:23:45.707 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme9n1 : 1.16 276.91 17.31 0.00 0.00 200186.10 18245.22 204682.04 00:23:45.707 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:45.707 Verification LBA range: start 0x0 length 0x400 00:23:45.707 Nvme10n1 : 1.19 328.21 20.51 0.00 0.00 166530.06 1756.36 209715.20 00:23:45.707 =================================================================================================================== 00:23:45.707 Total : 2885.80 180.36 0.00 0.00 202933.72 1756.36 236558.75 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.966 rmmod nvme_tcp 00:23:45.966 rmmod nvme_fabrics 00:23:45.966 rmmod nvme_keyring 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3123454 ']' 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3123454 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3123454 ']' 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3123454 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3123454 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:45.966 15:28:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3123454' 00:23:45.966 killing process with pid 3123454 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3123454 00:23:45.966 15:28:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3123454 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.534 15:28:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.440 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.440 00:23:48.440 real 0m16.254s 00:23:48.440 user 0m33.997s 00:23:48.440 sys 0m6.779s 00:23:48.440 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.440 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.440 ************************************ 00:23:48.440 END TEST nvmf_shutdown_tc1 00:23:48.440 ************************************ 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:48.700 ************************************ 00:23:48.700 START TEST nvmf_shutdown_tc2 00:23:48.700 ************************************ 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.700 15:28:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.700 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:48.701 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:48.701 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:48.701 Found net devices under 0000:af:00.0: cvl_0_0 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:48.701 Found net devices under 0000:af:00.1: cvl_0_1 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1
00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:48.701 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:48.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:48.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:23:48.960
00:23:48.960 --- 10.0.0.2 ping statistics ---
00:23:48.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:48.960 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:48.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:48.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms
00:23:48.960
00:23:48.960 --- 10.0.0.1 ping statistics ---
00:23:48.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:48.960 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 --
nvmf/common.sh@481 -- # nvmfpid=3125224 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3125224 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3125224 ']' 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.960 15:28:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.960 [2024-07-15 15:28:52.816155] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:23:48.960 [2024-07-15 15:28:52.816203] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.960 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.219 [2024-07-15 15:28:52.887565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.219 [2024-07-15 15:28:52.957274] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.219 [2024-07-15 15:28:52.957311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.219 [2024-07-15 15:28:52.957320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.219 [2024-07-15 15:28:52.957329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.219 [2024-07-15 15:28:52.957336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
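
The trace above is the tc2 target coming up: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier, with core mask 0x1E (cores 1-4, matching the four reactor notices that follow), and waitforlisten polls the RPC socket until the application answers. The doubled 'ip netns exec cvl_0_0_ns_spdk' prefix on the traced command is redundant but harmless, since both name the same namespace. Stripped of the harness wrappers, the step amounts to roughly this sketch; the polling loop is a stand-in for waitforlisten, which does more bookkeeping:

    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # framework_wait_init returns once SPDK subsystem initialization completes;
    # retry until the RPC socket exists and answers.
    until scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        sleep 0.5
    done
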
00:23:49.219 [2024-07-15 15:28:52.957440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.219 [2024-07-15 15:28:52.957541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.219 [2024-07-15 15:28:52.957654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.219 [2024-07-15 15:28:52.957655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.785 [2024-07-15 15:28:53.678803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.785 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.044 15:28:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.044 Malloc1 00:23:50.044 [2024-07-15 15:28:53.789504] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.044 Malloc2 00:23:50.044 Malloc3 00:23:50.044 Malloc4 00:23:50.044 Malloc5 00:23:50.301 Malloc6 00:23:50.301 Malloc7 00:23:50.301 Malloc8 00:23:50.301 Malloc9 00:23:50.301 Malloc10 00:23:50.301 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.301 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:50.301 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:50.301 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3125541 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3125541 /var/tmp/bdevperf.sock 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3125541 ']' 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
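
The bdevperf command traced above is the initiator side of the test; /dev/fd/63 is simply what bash's <(...) process substitution expands to, so the harness is piping the output of gen_nvmf_target_json straight into --json with no temporary file. Minus the wrappers, the launch reduces to the sketch below. The flags mean: -q 64, 64 outstanding I/Os per bdev; -o 65536, 64 KiB I/O size; -w verify, write then read back and compare; -t 10, run for 10 seconds.

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10
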
00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.559 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": 
"$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 [2024-07-15 15:28:54.273534] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:23:50.560 [2024-07-15 15:28:54.273589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125541 ] 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 "hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.560 { 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme$subsystem", 00:23:50.560 "trtype": "$TEST_TRANSPORT", 00:23:50.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "$NVMF_PORT", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.560 
"hdgst": ${hdgst:-false}, 00:23:50.560 "ddgst": ${ddgst:-false} 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 } 00:23:50.560 EOF 00:23:50.560 )") 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.560 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:50.560 15:28:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme1", 00:23:50.560 "trtype": "tcp", 00:23:50.560 "traddr": "10.0.0.2", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "4420", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.560 "hdgst": false, 00:23:50.560 "ddgst": false 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 },{ 00:23:50.560 "params": { 00:23:50.560 "name": "Nvme2", 00:23:50.560 "trtype": "tcp", 00:23:50.560 "traddr": "10.0.0.2", 00:23:50.560 "adrfam": "ipv4", 00:23:50.560 "trsvcid": "4420", 00:23:50.560 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:50.560 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:50.560 "hdgst": false, 00:23:50.560 "ddgst": false 00:23:50.560 }, 00:23:50.560 "method": "bdev_nvme_attach_controller" 00:23:50.560 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme3", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:50.561 "hdgst": false, 00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme4", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:50.561 "hdgst": false, 00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme5", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:50.561 "hdgst": false, 00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme6", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:50.561 "hdgst": false, 00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme7", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:50.561 "hdgst": false, 
00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme8", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:50.561 "hdgst": false, 00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme9", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:50.561 "hdgst": false, 00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 },{ 00:23:50.561 "params": { 00:23:50.561 "name": "Nvme10", 00:23:50.561 "trtype": "tcp", 00:23:50.561 "traddr": "10.0.0.2", 00:23:50.561 "adrfam": "ipv4", 00:23:50.561 "trsvcid": "4420", 00:23:50.561 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:50.561 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:50.561 "hdgst": false, 00:23:50.561 "ddgst": false 00:23:50.561 }, 00:23:50.561 "method": "bdev_nvme_attach_controller" 00:23:50.561 }' 00:23:50.561 [2024-07-15 15:28:54.346254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.561 [2024-07-15 15:28:54.415230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.935 Running I/O for 10 seconds... 00:23:51.935 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.935 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:51.935 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:51.935 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.935 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable
00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3
00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:23:52.201 15:28:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3125541
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3125541 ']'
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3125541
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3125541
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:52.459 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3125541'
killing process with pid 3125541
15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3125541
15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3125541
00:23:52.718 Received shutdown signal, test time was about 0.620327 seconds
00:23:52.718
00:23:52.718 Latency(us)
00:23:52.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.718 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme1n1 : 0.61 317.26 19.83 0.00 0.00 198530.39 30198.99 173644.19
00:23:52.718 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme2n1 : 0.60 318.55 19.91 0.00 0.00 192891.84 17511.22 187065.96
00:23:52.718 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme3n1 : 0.61 314.57 19.66 0.00 0.00 190589.34 18874.37 211392.92
00:23:52.718 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme4n1 : 0.59 324.05 20.25 0.00 0.00 179171.33 18664.65 187904.82
00:23:52.718 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme5n1 : 0.62 309.84 19.36 0.00 0.00 183676.38 19188.94 207198.62
00:23:52.718 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme6n1 : 0.60 215.02 13.44 0.00 0.00 256041.78 35441.87 223136.97
00:23:52.718 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme7n1 : 0.58 219.91 13.74 0.00 0.00 241904.03 18664.65 205520.90
00:23:52.718 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme8n1 : 0.61 312.39 19.52 0.00 0.00 167215.65 19293.80 204682.04
00:23:52.718 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme9n1 : 0.60 214.13 13.38 0.00 0.00 232485.68 19084.08 239914.19
00:23:52.718 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.718 Verification LBA range: start 0x0 length 0x400
00:23:52.718 Nvme10n1 : 0.62 311.03 19.44 0.00 0.00 158100.68 16882.07 205520.90
00:23:52.718 ===================================================================================================================
00:23:52.718 Total : 2856.75 178.55 0.00 0.00 195236.66 16882.07 239914.19
00:23:52.718 15:28:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3125224
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:23:54.095 15:28:57
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.095 rmmod nvme_tcp 00:23:54.095 rmmod nvme_fabrics 00:23:54.095 rmmod nvme_keyring 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3125224 ']' 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3125224 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3125224 ']' 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3125224 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3125224 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:54.095 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:54.096 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3125224' 00:23:54.096 killing process with pid 3125224 00:23:54.096 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3125224 00:23:54.096 15:28:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3125224 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.355 15:28:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.892 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.892 00:23:56.892 real 0m7.796s 00:23:56.892 user 0m22.775s 00:23:56.892 sys 0m1.541s 00:23:56.892 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.893 ************************************ 00:23:56.893 END TEST nvmf_shutdown_tc2 00:23:56.893 ************************************ 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:56.893 ************************************ 00:23:56.893 START TEST nvmf_shutdown_tc3 00:23:56.893 ************************************ 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.893 15:29:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:56.893 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:56.893 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:56.893 Found net devices under 0000:af:00.0: cvl_0_0 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:56.893 Found net devices under 0000:af:00.1: cvl_0_1 00:23:56.893 15:29:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.893 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:56.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:23:56.894 00:23:56.894 --- 10.0.0.2 ping statistics --- 00:23:56.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.894 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:23:56.894 00:23:56.894 --- 10.0.0.1 ping statistics --- 00:23:56.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.894 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3126710 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3126710 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3126710 ']' 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
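A note for readers skimming the trace: nvmf_tcp_init above (nvmf/common.sh@229-268) builds a back-to-back topology out of the two E810 ports, hiding the target port in a network namespace so that initiator-to-target traffic presumably crosses the physical link rather than being short-circuited over loopback (this is a NET_TYPE=phy run). The following is a minimal standalone sketch of the same setup, not the verbatim common.sh code; the interface names and addresses are the ones from this run, and it must be run as root:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                        # target side lives in here
ip link set cvl_0_0 netns "$NS"                           # move the target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP on the host side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the ns
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                                        # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # and the reverse path

Every target-side command is then wrapped in ip netns exec cvl_0_0_ns_spdk; the triple repetition of that prefix on the nvmf_tgt launch line just above appears to be NVMF_TARGET_NS_CMD being prepended to NVMF_APP once per nvmftestinit call across tc1/tc2/tc3, which is redundant but harmless.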
00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.894 15:29:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.894 [2024-07-15 15:29:00.726547] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:23:56.894 [2024-07-15 15:29:00.726602] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.894 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.153 [2024-07-15 15:29:00.802066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.153 [2024-07-15 15:29:00.876156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.153 [2024-07-15 15:29:00.876197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.153 [2024-07-15 15:29:00.876206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.153 [2024-07-15 15:29:00.876214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.153 [2024-07-15 15:29:00.876221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.153 [2024-07-15 15:29:00.876328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.153 [2024-07-15 15:29:00.876430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.153 [2024-07-15 15:29:00.876543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.153 [2024-07-15 15:29:00.876545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 [2024-07-15 15:29:01.561567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.719 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.977 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:57.977 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.977 15:29:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.977 Malloc1 00:23:57.977 [2024-07-15 15:29:01.669602] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.977 Malloc2 00:23:57.977 Malloc3 00:23:57.977 Malloc4 00:23:57.977 Malloc5 00:23:57.977 Malloc6 00:23:58.235 Malloc7 00:23:58.235 Malloc8 00:23:58.235 Malloc9 00:23:58.235 Malloc10 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.235 
15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3127021 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3127021 /var/tmp/bdevperf.sock 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3127021 ']' 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.235 { 00:23:58.235 "params": { 00:23:58.235 "name": "Nvme$subsystem", 00:23:58.235 "trtype": "$TEST_TRANSPORT", 00:23:58.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.235 "adrfam": "ipv4", 00:23:58.235 "trsvcid": "$NVMF_PORT", 00:23:58.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.235 "hdgst": ${hdgst:-false}, 00:23:58.235 "ddgst": ${ddgst:-false} 00:23:58.235 }, 00:23:58.235 "method": "bdev_nvme_attach_controller" 00:23:58.235 } 00:23:58.235 EOF 00:23:58.235 )") 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.235 { 00:23:58.235 "params": { 00:23:58.235 "name": "Nvme$subsystem", 00:23:58.235 "trtype": "$TEST_TRANSPORT", 00:23:58.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.235 "adrfam": "ipv4", 00:23:58.235 "trsvcid": "$NVMF_PORT", 00:23:58.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.235 "hdgst": ${hdgst:-false}, 00:23:58.235 "ddgst": ${ddgst:-false} 00:23:58.235 }, 00:23:58.235 "method": "bdev_nvme_attach_controller" 00:23:58.235 } 00:23:58.235 EOF 00:23:58.235 )") 00:23:58.235 15:29:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.235 { 00:23:58.235 "params": { 00:23:58.235 "name": "Nvme$subsystem", 00:23:58.235 "trtype": "$TEST_TRANSPORT", 00:23:58.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.235 "adrfam": "ipv4", 00:23:58.235 "trsvcid": "$NVMF_PORT", 00:23:58.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.235 "hdgst": ${hdgst:-false}, 00:23:58.235 "ddgst": ${ddgst:-false} 00:23:58.235 }, 00:23:58.235 "method": "bdev_nvme_attach_controller" 00:23:58.235 } 00:23:58.235 EOF 00:23:58.235 )") 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.235 { 00:23:58.235 "params": { 00:23:58.235 "name": "Nvme$subsystem", 00:23:58.235 "trtype": "$TEST_TRANSPORT", 00:23:58.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.235 "adrfam": "ipv4", 00:23:58.235 "trsvcid": "$NVMF_PORT", 00:23:58.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.235 "hdgst": ${hdgst:-false}, 00:23:58.235 "ddgst": ${ddgst:-false} 00:23:58.235 }, 00:23:58.235 "method": "bdev_nvme_attach_controller" 00:23:58.235 } 00:23:58.235 EOF 00:23:58.235 )") 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.235 { 00:23:58.235 "params": { 00:23:58.235 "name": "Nvme$subsystem", 00:23:58.235 "trtype": "$TEST_TRANSPORT", 00:23:58.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.235 "adrfam": "ipv4", 00:23:58.235 "trsvcid": "$NVMF_PORT", 00:23:58.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.235 "hdgst": ${hdgst:-false}, 00:23:58.235 "ddgst": ${ddgst:-false} 00:23:58.235 }, 00:23:58.235 "method": "bdev_nvme_attach_controller" 00:23:58.235 } 00:23:58.235 EOF 00:23:58.235 )") 00:23:58.235 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.494 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.494 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.494 { 00:23:58.494 "params": { 00:23:58.494 "name": "Nvme$subsystem", 00:23:58.494 "trtype": "$TEST_TRANSPORT", 00:23:58.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.494 "adrfam": "ipv4", 00:23:58.494 "trsvcid": "$NVMF_PORT", 00:23:58.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.494 "hdgst": ${hdgst:-false}, 00:23:58.494 "ddgst": ${ddgst:-false} 00:23:58.494 }, 00:23:58.494 "method": "bdev_nvme_attach_controller" 00:23:58.494 } 00:23:58.494 EOF 00:23:58.494 )") 00:23:58.494 [2024-07-15 15:29:02.147529] Starting 
SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:23:58.494 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.495 [2024-07-15 15:29:02.147582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127021 ] 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.495 { 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme$subsystem", 00:23:58.495 "trtype": "$TEST_TRANSPORT", 00:23:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "$NVMF_PORT", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.495 "hdgst": ${hdgst:-false}, 00:23:58.495 "ddgst": ${ddgst:-false} 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 } 00:23:58.495 EOF 00:23:58.495 )") 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.495 { 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme$subsystem", 00:23:58.495 "trtype": "$TEST_TRANSPORT", 00:23:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "$NVMF_PORT", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.495 "hdgst": ${hdgst:-false}, 00:23:58.495 "ddgst": ${ddgst:-false} 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 } 00:23:58.495 EOF 00:23:58.495 )") 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.495 { 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme$subsystem", 00:23:58.495 "trtype": "$TEST_TRANSPORT", 00:23:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "$NVMF_PORT", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.495 "hdgst": ${hdgst:-false}, 00:23:58.495 "ddgst": ${ddgst:-false} 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 } 00:23:58.495 EOF 00:23:58.495 )") 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:58.495 { 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme$subsystem", 00:23:58.495 "trtype": "$TEST_TRANSPORT", 00:23:58.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 
"trsvcid": "$NVMF_PORT", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.495 "hdgst": ${hdgst:-false}, 00:23:58.495 "ddgst": ${ddgst:-false} 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 } 00:23:58.495 EOF 00:23:58.495 )") 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:58.495 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:58.495 15:29:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme1", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme2", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme3", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme4", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme5", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme6", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme7", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 
"trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme8", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme9", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 },{ 00:23:58.495 "params": { 00:23:58.495 "name": "Nvme10", 00:23:58.495 "trtype": "tcp", 00:23:58.495 "traddr": "10.0.0.2", 00:23:58.495 "adrfam": "ipv4", 00:23:58.495 "trsvcid": "4420", 00:23:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:58.495 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:58.495 "hdgst": false, 00:23:58.495 "ddgst": false 00:23:58.495 }, 00:23:58.495 "method": "bdev_nvme_attach_controller" 00:23:58.495 }' 00:23:58.495 [2024-07-15 15:29:02.220040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.495 [2024-07-15 15:29:02.289433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.397 Running I/O for 10 seconds... 
00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:00.397 15:29:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.397 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=128 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 128 -ge 100 ']' 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3126710 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3126710 ']' 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3126710 00:24:00.670 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:00.671 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.671 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3126710 00:24:00.671 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:00.671 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:00.671 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3126710' 00:24:00.671 killing process with pid 3126710 00:24:00.671 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3126710 00:24:00.671 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3126710 00:24:00.671 [2024-07-15 15:29:04.371657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371722] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371768] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371785] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999a00 is same with the state(5) to be set 00:24:00.671 [2024-07-15 15:29:04.371795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1999a00 is same with the state(5) to be set
[... the identical tcp.c:1621 recv-state *ERROR* line repeats for the remaining state transitions of tqpair 0x1999a00, timestamps 15:29:04.371803 through 15:29:04.372262; duplicate lines elided ...]
00:24:00.671 [2024-07-15 15:29:04.374103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.671 [2024-07-15 15:29:04.374140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE / ABORTED - SQ DELETION (00/08) pairs repeat for cid 34 through 58 (lba advancing by 128 from 20736 to 23808) as bdevperf's in-flight writes are failed back; duplicate lines elided, log truncated mid-record ...]
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.374983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.374994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 
[2024-07-15 15:29:04.375072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 
15:29:04.375272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.672 [2024-07-15 15:29:04.375417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.672 [2024-07-15 15:29:04.375446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.672 [2024-07-15 15:29:04.375840] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x198c8d0 was disconnected and freed. reset controller. 
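The dump above ends with spdk_nvme_qpair_process_completions returning a CQ transport error (-6, ENXIO) and bdev_nvme's disconnected-qpair callback freeing the qpair before resetting the controller. A minimal host-side sketch of that reaction, assuming an already-connected ctrlr and io_qpair (poll_io_qpair is a hypothetical helper, not SPDK code):

```c
#include "spdk/nvme.h"

/* Poll an I/O qpair; on a fatal transport error (negated errno such as
 * -ENXIO, "No such device or address"), stop using the qpair and reset
 * the controller, mirroring the disconnect-and-reset seen in the log. */
static void
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
	if (rc < 0) {
		/* Outstanding commands complete as ABORTED - SQ DELETION, as above. */
		spdk_nvme_ctrlr_free_io_qpair(qpair);
		spdk_nvme_ctrlr_reset(ctrlr);
	}
}
```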
00:24:00.672 [2024-07-15 15:29:04.375869] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375895] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375910] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.672 [2024-07-15 15:29:04.375919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.672 [2024-07-15 15:29:04.375941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.672 [2024-07-15 15:29:04.375951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.672 [2024-07-15 15:29:04.375961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.672 [2024-07-15 15:29:04.375971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.672 [2024-07-15 15:29:04.375980] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.672 [2024-07-15 15:29:04.375990] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.375996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.672 [2024-07-15 15:29:04.375999] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e940 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376009] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376028] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376036] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.672 [2024-07-15 15:29:04.376054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.672 [2024-07-15 15:29:04.376064] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.672 [2024-07-15 15:29:04.376075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376104] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376123] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376132] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197eb20 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376141] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376197] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376217] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376227] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1952c30 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376264] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376282] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376314] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376323] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376332] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.673 [2024-07-15 15:29:04.376342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376351] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad91d0 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
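The repeated tcp.c:1621 lines above come from the target refusing a no-op receive-state transition while qpairs tear down. A hedged stand-alone sketch of that guard (types and names here are hypothetical stand-ins, not SPDK's internals; only the guard logic mirrors the logged message):

```c
#include <stdio.h>

/* Hypothetical stand-ins for the target's tqpair type and recv-state enum. */
enum recv_state { RECV_STATE_ERROR = 5 };

struct tcp_qpair { enum recv_state recv_state; };

static void
set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
{
	if (tqpair->recv_state == state) {
		/* A no-op transition is reported and skipped - hence the flood of
		 * "is same with the state(5) to be set" lines during teardown. */
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}
```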
00:24:00.673 [2024-07-15 15:29:04.376371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376380] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376398] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376406] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376441] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376450] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376458] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376467] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376484] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9430 is same with the state(5) to be set
00:24:00.673 [2024-07-15 15:29:04.376737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.376986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.376995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.673 [2024-07-15 15:29:04.377232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377717] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377727] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377737] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377747] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377776] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377828] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377843] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377894] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377935] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377955] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.674 [2024-07-15 15:29:04.377964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.674 [2024-07-15 15:29:04.377973] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.674 [2024-07-15 15:29:04.377980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.673 [2024-07-15 15:29:04.377983] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.377991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.675 [2024-07-15 15:29:04.377992] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.675 [2024-07-15 15:29:04.378003] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378015] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.675 [2024-07-15 15:29:04.378025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.675 [2024-07-15 15:29:04.378035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.675 [2024-07-15 15:29:04.378045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.675 [2024-07-15 15:29:04.378055] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.675 [2024-07-15 15:29:04.378065] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.675 [2024-07-15 15:29:04.378075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.675 [2024-07-15 15:29:04.378103] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.378112] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to
be set 00:24:00.675 [2024-07-15 15:29:04.378121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378130] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378147] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378150] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19d32e0 was disconnected and freed. reset controller. 00:24:00.675 [2024-07-15 15:29:04.378156] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378165] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378174] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378183] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378191] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378218] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378226] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378244] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378252] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378270] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9d70 is same with the state(5) to be set 00:24:00.675 [2024-07-15 15:29:04.378301] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
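The "(00/08)" in the completions above is the NVMe status printed as an (SCT/SC) pair: status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion); p, m and dnr are the phase, more and do-not-retry bits of the same 16-bit status field, and sqhd is the submission queue head pointer. A minimal, self-contained decoder following the spec's bit layout; the helper below is illustrative, not SPDK API:

    /* Decode the "(SCT/SC)" pair printed in the completions above.
     * Bit positions follow the NVMe base spec completion status field. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *generic_sc_str(uint8_t sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "OTHER";
        }
    }

    int main(void)
    {
        uint16_t status = (0x0 << 9) | (0x08 << 1);  /* SCT=0x0 (generic), SC=0x08 */

        uint8_t sc  = (status >> 1) & 0xff;  /* status code */
        uint8_t sct = (status >> 9) & 0x7;   /* status code type */
        uint8_t m   = (status >> 14) & 0x1;  /* more status information available */
        uint8_t dnr = (status >> 15) & 0x1;  /* do not retry */

        if (sct == 0x0)  /* generic command status, as in every completion above */
            printf("%s (%02x/%02x) m:%u dnr:%u\n", generic_sc_str(sc), sct, sc, m, dnr);
        return 0;
    }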
00:24:00.675 [2024-07-15 15:29:04.379182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca210 is same with the state(5) to be set
[... the same recv-state error for tqpair=0x17ca210 repeated 62 more times (15:29:04.379208 through 15:29:04.379761) omitted ...]
00:24:00.675 [2024-07-15 15:29:04.380246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
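The flood of "recv state of tqpair=... is same with the state(5) to be set" messages comes from the target side setting a TCP qpair's receive state to the state it is already in while the connection is torn down. A guard of roughly this shape produces that pattern; the names and the meaning of state 5 below are assumptions for illustration, not SPDK's actual definitions in tcp.c:

    /* Illustrative state-setter guard; not the real nvmf_tcp_qpair_set_recv_state. */
    #include <stdio.h>

    enum tqpair_recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        /* intermediate states elided */
        RECV_STATE_ERROR = 5,  /* assumed: the state(5) seen in the log */
    };

    struct tqpair {
        enum tqpair_recv_state recv_state;
    };

    static void set_recv_state(struct tqpair *tq, enum tqpair_recv_state state)
    {
        if (tq->recv_state == state) {
            /* Redundant transition: log and return. Repeated teardown paths
             * asking for the same (error) state is what floods the console. */
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tq, (int)state);
            return;
        }
        tq->recv_state = state;
    }

    int main(void)
    {
        struct tqpair tq = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&tq, RECV_STATE_ERROR);  /* emits the message once */
        return 0;
    }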
00:24:00.675 [2024-07-15 15:29:04.380278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad91d0 (9): Bad file descriptor
00:24:00.675 [2024-07-15 15:29:04.380772] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca6b0 is same with the state(5) to be set
00:24:00.675 [2024-07-15 15:29:04.381269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.675 [2024-07-15 15:29:04.381286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.675 [2024-07-15 15:29:04.381301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.675 [2024-07-15 15:29:04.381311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.675 [2024-07-15 15:29:04.381322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.675 [2024-07-15 15:29:04.381331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands cid:1-37 (lba:16512-21120, len:128), each completed as ABORTED - SQ DELETION (00/08), omitted; from 15:29:04.381513 the console output is spliced mid-line with recv-state errors for tqpair=0x17cab70 ...]
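The "(9): Bad file descriptor" and "-6 (No such device or address)" annotations above are plain errno values (EBADF and -ENXIO) attached to the failure paths once the socket under the qpair is gone, rendered from the errno table as libc's strerror() would render them (SPDK ships a thread-safe equivalent, spdk_strerror()). A minimal sketch of the same rendering:

    /* Render the two errno values seen in this log the way the
     * messages above render them. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int flush_rc = 9;   /* EBADF: the TCP socket was already closed */
        int cq_rc = -6;     /* -ENXIO: the qpair's transport is gone */

        printf("Failed to flush tqpair (%d): %s\n", flush_rc, strerror(flush_rc));
        printf("CQ transport error %d (%s) on qpair id 1\n", cq_rc, strerror(-cq_rc));
        return 0;
    }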
00:24:00.677 [2024-07-15 15:29:04.382904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.677 [2024-07-15 15:29:04.382957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands cid:39-60 (lba:21376-24064, len:128), each completed as ABORTED - SQ DELETION (00/08), and interleaved recv-state errors for tqpair=0x17cb010 (first at 15:29:04.383408) omitted ...]
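Every aborted completion in this burst carries dnr:0, i.e. the Do Not Retry bit is clear, which is what lets the initiator queue the same READs and WRITEs again once the controller reset finishes. A sketch of that retry decision, with illustrative struct and helper names rather than SPDK API:

    /* Retry decision implied by the dnr:0 completions above:
     * ABORTED - SQ DELETION with DNR clear is retryable after reset. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct cpl {
        uint8_t sct;  /* status code type: 0x0 = generic */
        uint8_t sc;   /* status code: 0x08 = aborted, SQ deletion */
        bool    dnr;  /* do not retry */
    };

    static bool should_requeue(const struct cpl *c)
    {
        if (c->sct == 0x0 && c->sc == 0x00)
            return false;   /* success: nothing to redo */
        return !c->dnr;     /* retry unless DNR is set */
    }

    int main(void)
    {
        struct cpl aborted = { .sct = 0x0, .sc = 0x08, .dnr = false };
        printf("requeue after reset: %s\n", should_requeue(&aborted) ? "yes" : "no");
        return 0;
    }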
00:24:00.677 [2024-07-15 15:29:04.386265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17cb010 is same with the state(5) to be set
00:24:00.677 [2024-07-15 15:29:04.386301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.678 [2024-07-15 15:29:04.386372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.678 [2024-07-15 15:29:04.386499] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19d5c90 was disconnected and freed. reset controller.
00:24:00.678 [2024-07-15 15:29:04.386657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.678 [2024-07-15 15:29:04.386725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE commands cid:1-16 (lba:16512-18432, len:128) with their ABORTED - SQ DELETION (00/08) completions, and interleaved recv-state errors for tqpair=0x17cb010 throughout, omitted ...]
00:24:00.678 [2024-07-15
15:29:04.388497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.388533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.388566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.388602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.388642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.388678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.388712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.388747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.388779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.388815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.388854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.388890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.388923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.388964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.388996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 
15:29:04.389205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 
15:29:04.389917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.389954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.389986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.390022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.390055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.390090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.390124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.390159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.390192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.390228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.390262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.390297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.678 [2024-07-15 15:29:04.390335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.678 [2024-07-15 15:29:04.390372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 
15:29:04.390614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.390943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.390976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 
15:29:04.391326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.679 [2024-07-15 15:29:04.391819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.679 [2024-07-15 15:29:04.391917] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19896e0 was disconnected and freed. reset controller. 
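The "(00/08)" printed with every aborted completion above is the NVMe status pair SCT/SC: status code type 0x0 (generic) with status code 0x08, "Aborted - SQ Deletion", which is what the initiator reports for I/O still outstanding when its submission queue is torn down for the reset. A minimal, SPDK-independent sketch of decoding that pair from the 16-bit completion status word (field layout per the NVMe base specification; the sample value is the (00/08) case from this log):

#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe completion status word: bit 0 = phase tag (P),
 * bits 8:1 = status code (SC), bits 11:9 = status code type (SCT),
 * bit 14 = more (M), bit 15 = do not retry (DNR). */
static void decode_status(uint16_t status)
{
    unsigned sct = (status >> 9) & 0x7u;
    unsigned sc  = (status >> 1) & 0xffu;

    printf("(%02x/%02x)%s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
}

int main(void)
{
    decode_status(0x08u << 1); /* prints "(00/08) ABORTED - SQ DELETION" */
    return 0;
}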
00:24:00.679 [2024-07-15 15:29:04.392035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:00.679 [2024-07-15 15:29:04.392062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e940 (9): Bad file descriptor
[elided: on each admin queue the four outstanding ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 commands are printed and completed as ABORTED - SQ DELETION (00/08), followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state *ERROR* "The recv state of tqpair=... is same with the state(5) to be set" for tqpairs 0x19fd150, 0x1a17180, 0x1975a10, 0x1455610, 0x1a18ee0 and 0x1ac51f0; flush failures "(9): Bad file descriptor" for tqpairs 0x197eb20 and 0x1952c30]
00:24:00.679 [2024-07-15 15:29:04.407243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:00.679 [2024-07-15 15:29:04.407291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:00.679 [2024-07-15 15:29:04.407310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a17180 (9): Bad file descriptor
00:24:00.679 [2024-07-15 15:29:04.407329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1975a10 (9): Bad file descriptor
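The "(9)" in each flush error is the raw errno, EBADF ("Bad file descriptor"): by the time nvme_tcp_qpair_process_completions tries to flush the qpair, its socket has already been closed for the reset. A tiny plain-POSIX sketch (not SPDK code) showing how the two errno values in this log map to their messages:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* On Linux, EBADF == 9 and ECONNREFUSED == 111, matching this log. */
    printf("(%d): %s\n", EBADF, strerror(EBADF));
    printf("errno = %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}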
00:24:00.679 [2024-07-15 15:29:04.407702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.679 [2024-07-15 15:29:04.407724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad91d0 with addr=10.0.0.2, port=4420
00:24:00.679 [2024-07-15 15:29:04.407742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad91d0 is same with the state(5) to be set
[elided: flush failures "(9): Bad file descriptor" for tqpairs 0x19fd150, 0x1455610, 0x1a18ee0, 0x1ac51f0 and 0x1ad91d0]
00:24:00.679 [2024-07-15 15:29:04.408891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.679 [2024-07-15 15:29:04.408920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1e940 with addr=10.0.0.2, port=4420
00:24:00.679 [2024-07-15 15:29:04.408933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e940 is same with the state(5) to be set
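errno 111 is ECONNREFUSED: while the subsystem listener on 10.0.0.2:4420 is being torn down and re-created around the controller reset, nothing is accepting connections yet, so the reconnect attempt is refused and retried. A hedged, SPDK-independent sketch of that client-side retry pattern (the address, port and retry policy here are illustrative, not the test's actual values):

#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Keep retrying connect() while the listener is not up yet, i.e. while
 * connect() fails with ECONNREFUSED (errno 111 on Linux). */
static int connect_with_retry(const char *host, uint16_t port, int attempts)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (inet_pton(AF_INET, host, &addr.sin_addr) != 1)
        return -1;

    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                        /* listener is back; connected */
        int err = errno;
        close(fd);
        if (err != ECONNREFUSED)              /* only retry "not listening yet" */
            return -1;
        fprintf(stderr, "connect() failed, errno = %d (%s); retrying\n",
                err, strerror(err));
        usleep(100 * 1000);                   /* 100 ms between attempts */
    }
    return -1;
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 5); /* illustrative target */
    if (fd >= 0)
        close(fd);
    return 0;
}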
00:24:00.680 [2024-07-15 15:29:04.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.680 [2024-07-15 15:29:04.409032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[elided: the same READ-abort pattern for sqid:1 cid:1-63 (lba:16512-24448, len:128); a second pass over the same READ range then starts again at cid:0 lba:16384]
00:24:00.681 [2024-07-15 15:29:04.412473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412486] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.412980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.412992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.681 [2024-07-15 15:29:04.413369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.681 [2024-07-15 15:29:04.413382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:00.682 [2024-07-15 15:29:04.413622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 
15:29:04.413904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.413932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.682 [2024-07-15 15:29:04.413945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.682 [2024-07-15 15:29:04.415655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.682 [2024-07-15 15:29:04.415682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:00.682 [2024-07-15 15:29:04.415987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.682 [2024-07-15 15:29:04.416005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1975a10 with addr=10.0.0.2, port=4420 00:24:00.682 [2024-07-15 15:29:04.416017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1975a10 is same with the state(5) to be set 00:24:00.682 [2024-07-15 15:29:04.416283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.682 [2024-07-15 15:29:04.416295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a17180 with addr=10.0.0.2, port=4420 00:24:00.682 [2024-07-15 15:29:04.416305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a17180 is same with the state(5) to be set 00:24:00.682 [2024-07-15 15:29:04.416318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e940 (9): Bad file descriptor 00:24:00.682 [2024-07-15 15:29:04.416329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:00.682 [2024-07-15 15:29:04.416339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:00.682 [2024-07-15 15:29:04.416350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:00.682 [2024-07-15 15:29:04.416452] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:00.682 [2024-07-15 15:29:04.416509] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:00.682 [2024-07-15 15:29:04.416563] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:00.682 [2024-07-15 15:29:04.416616] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:00.682 [2024-07-15 15:29:04.416631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.682 [2024-07-15 15:29:04.416925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.682 [2024-07-15 15:29:04.416941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1952c30 with addr=10.0.0.2, port=4420
00:24:00.682 [2024-07-15 15:29:04.416951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1952c30 is same with the state(5) to be set
00:24:00.682 [2024-07-15 15:29:04.417205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.682 [2024-07-15 15:29:04.417218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197eb20 with addr=10.0.0.2, port=4420
00:24:00.682 [2024-07-15 15:29:04.417228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197eb20 is same with the state(5) to be set
00:24:00.682 [2024-07-15 15:29:04.417240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1975a10 (9): Bad file descriptor
00:24:00.682 [2024-07-15 15:29:04.417252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a17180 (9): Bad file descriptor
00:24:00.682 [2024-07-15 15:29:04.417264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:24:00.682 [2024-07-15 15:29:04.417273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:24:00.682 [2024-07-15 15:29:04.417283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:24:00.682 [2024-07-15 15:29:04.417809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.682 [2024-07-15 15:29:04.417824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1952c30 (9): Bad file descriptor
00:24:00.682 [2024-07-15 15:29:04.417841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197eb20 (9): Bad file descriptor
00:24:00.682 [2024-07-15 15:29:04.417852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:24:00.682 [2024-07-15 15:29:04.417861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:24:00.682 [2024-07-15 15:29:04.417870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:24:00.682 [2024-07-15 15:29:04.417884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:24:00.682 [2024-07-15 15:29:04.417893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:24:00.682 [2024-07-15 15:29:04.417902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:24:00.682 [2024-07-15 15:29:04.417980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.682 [2024-07-15 15:29:04.417991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.682 [2024-07-15 15:29:04.418019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:00.682 [2024-07-15 15:29:04.418028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:00.682 [2024-07-15 15:29:04.418038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:00.682 [2024-07-15 15:29:04.418049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:24:00.682 [2024-07-15 15:29:04.418059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:24:00.682 [2024-07-15 15:29:04.418068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:24:00.682 [2024-07-15 15:29:04.418130 - 15:29:04.419489] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 identical command/completion pairs condensed)
00:24:00.683 [2024-07-15 15:29:04.419499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d7120 is same with the state(5) to be set
00:24:00.683 [2024-07-15 15:29:04.420525 - 15:29:04.421244] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-33 nsid:1 lba:8192-12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (identical command/completion pairs condensed)
00:24:00.684 [2024-07-15 15:29:04.421254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 
15:29:04.421467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421681] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.684 [2024-07-15 15:29:04.421797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.684 [2024-07-15 15:29:04.421807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.421818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.421828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.421845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.421856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.421867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.421877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.421888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.421898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.421909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194e300 is same with the state(5) to be set 00:24:00.685 [2024-07-15 15:29:04.422944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.422974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.422988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.422998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.423982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.423992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.424003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.424012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.424024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:00.685 [2024-07-15 15:29:04.424033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.424044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.424054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.685 [2024-07-15 15:29:04.424065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-07-15 15:29:04.424075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 
15:29:04.424254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.424333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.424343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab70 is same with the state(5) to be set 00:24:00.686 [2024-07-15 15:29:04.425311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425640] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.425985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.425996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.426007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.426016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.426027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.426037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.426047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.426056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.426067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.426076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.686 [2024-07-15 15:29:04.426087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.686 [2024-07-15 15:29:04.426097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:00.687 [2024-07-15 15:29:04.426477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-07-15 15:29:04.426641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.687 [2024-07-15 15:29:04.426651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac3820 is same with the state(5) to be set 00:24:00.687 [2024-07-15 15:29:04.428519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:00.687 [2024-07-15 15:29:04.428541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:00.687 [2024-07-15 15:29:04.428552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: 
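Every completion in the aborted-IO dump above carries the same status pair, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), the expected outcome for reads still outstanding on a queue pair that is torn down while its controller resets. A minimal decode of that pair, as an illustrative Python sketch (not part of the test code; only the codes seen in this log are covered):

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
    SCT = {0x0: "GENERIC"}                                     # status code type
    SC_GENERIC = {0x08: "COMMAND ABORTED DUE TO SQ DELETION"}  # status codes for SCT 0x0

    def decode(sct: int, sc: int) -> str:
        return f"{SCT.get(sct, '?')} / {SC_GENERIC.get(sc, '?')}"

    print(decode(0x00, 0x08))  # -> GENERIC / COMMAND ABORTED DUE TO SQ DELETION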
Resetting controller failed.
00:24:00.687 [2024-07-15 15:29:04.428561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.687 [2024-07-15 15:29:04.428570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:00.687 [2024-07-15 15:29:04.428583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:00.687 [2024-07-15 15:29:04.428652] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:00.687 [2024-07-15 15:29:04.428673] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:00.687 [2024-07-15 15:29:04.428720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:00.687 task offset: 20608 on job bdev=Nvme10n1 fails
00:24:00.687
00:24:00.687 Latency(us)
00:24:00.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.687 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme1n1 ended in about 0.60 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme1n1 : 0.60 212.34 13.27 106.17 0.00 198160.11 17930.65 201326.59
00:24:00.687 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme2n1 ended in about 0.57 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme2n1 : 0.57 223.66 13.98 111.83 0.00 183056.04 3879.73 203843.17
00:24:00.687 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme3n1 ended in about 0.61 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme3n1 : 0.61 211.25 13.20 105.63 0.00 189297.73 15518.92 205520.90
00:24:00.687 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme4n1 ended in about 0.60 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme4n1 : 0.60 214.64 13.42 107.32 0.00 181112.83 18559.80 179516.21
00:24:00.687 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme5n1 ended in about 0.61 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme5n1 : 0.61 104.68 6.54 104.68 0.00 271875.28 36700.16 208876.34
00:24:00.687 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme6n1 ended in about 0.61 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme6n1 : 0.61 104.27 6.52 104.27 0.00 265730.05 19503.51 226492.42
00:24:00.687 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme7n1 ended in about 0.60 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme7n1 : 0.60 214.18 13.39 107.09 0.00 166739.42 20656.95 226492.42
00:24:00.687 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme8n1 ended in about 0.62 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme8n1 : 0.62 207.72 12.98 103.86 0.00 167931.08 19188.94 182871.65
00:24:00.687 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme9n1 ended in about 0.62 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme9n1 : 0.62 103.47 6.47 103.47 0.00 245766.55 39216.74 216426.09
00:24:00.687 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.687 Job: Nvme10n1 ended in about 0.57 seconds with error
00:24:00.687 Verification LBA range: start 0x0 length 0x400
00:24:00.687 Nvme10n1 : 0.57 224.04 14.00 112.02 0.00 143121.68 5505.02 180355.07
00:24:00.687 ===================================================================================================================
00:24:00.687 Total : 1820.25 113.77 1066.34 0.00 194629.65 3879.73 226492.42
00:24:00.687 [2024-07-15 15:29:04.451509] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:00.687 [2024-07-15 15:29:04.451548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:00.687 [2024-07-15 15:29:04.452003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.687 [2024-07-15 15:29:04.452024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad91d0 with addr=10.0.0.2, port=4420
00:24:00.687 [2024-07-15 15:29:04.452036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad91d0 is same with the state(5) to be set
00:24:00.687 [2024-07-15 15:29:04.452365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.687 [2024-07-15 15:29:04.452377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1e940 with addr=10.0.0.2, port=4420
00:24:00.687 [2024-07-15 15:29:04.452387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e940 is same with the state(5) to be set
00:24:00.687 [2024-07-15 15:29:04.452729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.687 [2024-07-15 15:29:04.452741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a18ee0 with addr=10.0.0.2, port=4420
00:24:00.687 [2024-07-15 15:29:04.452755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a18ee0 is same with the state(5) to be set
00:24:00.687 [2024-07-15 15:29:04.453063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.687 [2024-07-15 15:29:04.453075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1455610 with addr=10.0.0.2, port=4420
00:24:00.687 [2024-07-15 15:29:04.453085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455610 is same with the state(5) to be set
00:24:00.687 [2024-07-15 15:29:04.454007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:00.687 [2024-07-15 15:29:04.454027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:00.687 [2024-07-15 15:29:04.454039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:00.687 [2024-07-15 15:29:04.454050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:00.687 [2024-07-15 15:29:04.454405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.687 [2024-07-15 15:29:04.454420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac51f0 with addr=10.0.0.2,
port=4420 00:24:00.687 [2024-07-15 15:29:04.454430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac51f0 is same with the state(5) to be set 00:24:00.687 [2024-07-15 15:29:04.454662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.687 [2024-07-15 15:29:04.454675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fd150 with addr=10.0.0.2, port=4420 00:24:00.687 [2024-07-15 15:29:04.454684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd150 is same with the state(5) to be set 00:24:00.687 [2024-07-15 15:29:04.454699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad91d0 (9): Bad file descriptor 00:24:00.687 [2024-07-15 15:29:04.454712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1e940 (9): Bad file descriptor 00:24:00.687 [2024-07-15 15:29:04.454723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a18ee0 (9): Bad file descriptor 00:24:00.687 [2024-07-15 15:29:04.454734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455610 (9): Bad file descriptor 00:24:00.687 [2024-07-15 15:29:04.454770] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:00.687 [2024-07-15 15:29:04.454783] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:00.688 [2024-07-15 15:29:04.454795] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:00.688 [2024-07-15 15:29:04.454810] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
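errno = 111 is ECONNREFUSED on Linux: the shutdown test has already killed the target, so nothing is listening on 10.0.0.2:4420 and every reconnect the bdev layer attempts fails the same way before failover gives up. A quick probe from the initiator side shows the same condition (a sketch; the address and port are taken straight from the log above):

    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "no listener on 10.0.0.2:4420 -> connect() fails with errno 111 (ECONNREFUSED)"
    fi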
00:24:00.688 [2024-07-15 15:29:04.455081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.688 [2024-07-15 15:29:04.455096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a17180 with addr=10.0.0.2, port=4420 00:24:00.688 [2024-07-15 15:29:04.455106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a17180 is same with the state(5) to be set 00:24:00.688 [2024-07-15 15:29:04.455365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.688 [2024-07-15 15:29:04.455377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1975a10 with addr=10.0.0.2, port=4420 00:24:00.688 [2024-07-15 15:29:04.455386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1975a10 is same with the state(5) to be set 00:24:00.688 [2024-07-15 15:29:04.455566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.688 [2024-07-15 15:29:04.455578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197eb20 with addr=10.0.0.2, port=4420 00:24:00.688 [2024-07-15 15:29:04.455591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197eb20 is same with the state(5) to be set 00:24:00.688 [2024-07-15 15:29:04.455825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.688 [2024-07-15 15:29:04.455844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1952c30 with addr=10.0.0.2, port=4420 00:24:00.688 [2024-07-15 15:29:04.455854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1952c30 is same with the state(5) to be set 00:24:00.688 [2024-07-15 15:29:04.455866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac51f0 (9): Bad file descriptor 00:24:00.688 [2024-07-15 15:29:04.455877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fd150 (9): Bad file descriptor 00:24:00.688 [2024-07-15 15:29:04.455888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.455897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.455907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:00.688 [2024-07-15 15:29:04.455919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.455928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.455936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:00.688 [2024-07-15 15:29:04.455947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.455956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.455964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:24:00.688 [2024-07-15 15:29:04.455975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.455984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.455993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:00.688 [2024-07-15 15:29:04.456058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a17180 (9): Bad file descriptor 00:24:00.688 [2024-07-15 15:29:04.456103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1975a10 (9): Bad file descriptor 00:24:00.688 [2024-07-15 15:29:04.456114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197eb20 (9): Bad file descriptor 00:24:00.688 [2024-07-15 15:29:04.456124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1952c30 (9): Bad file descriptor 00:24:00.688 [2024-07-15 15:29:04.456134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.456143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.456151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:00.688 [2024-07-15 15:29:04.456161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.456172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.456181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:00.688 [2024-07-15 15:29:04.456206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.456231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.456239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:24:00.688 [2024-07-15 15:29:04.456249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.456258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.456267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:00.688 [2024-07-15 15:29:04.456277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.456285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.456294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:00.688 [2024-07-15 15:29:04.456304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.688 [2024-07-15 15:29:04.456312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.688 [2024-07-15 15:29:04.456321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:00.688 [2024-07-15 15:29:04.456347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.688 [2024-07-15 15:29:04.456371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
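This is the tail of the reconnect state machine for each of the ten controllers: nvme_ctrlr_process_init finds the controller in an error state, spdk_nvme_ctrlr_reconnect_poll_async reports the reinitialization failure, nvme_ctrlr_fail parks it in a failed state, and the bdev layer logs the reset as failed. Controller state can be inspected over the bdevperf RPC socket, the same way the multicontroller test does later in this log (a sketch assuming the /var/tmp/bdevperf.sock path used there):

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe   # count of attached controllers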
00:24:00.947 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:00.947 15:29:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3127021 00:24:02.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3127021) - No such process 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.324 rmmod nvme_tcp 00:24:02.324 rmmod nvme_fabrics 00:24:02.324 rmmod nvme_keyring 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.324 15:29:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.228 15:29:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.228 00:24:04.228 real 0m7.690s 00:24:04.228 user 0m17.937s 00:24:04.229 sys 0m1.520s 00:24:04.229 
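The teardown above condenses to a short sequence; a sketch of what nvmftestfini and nvmf_tcp_fini do in this run (the _remove_spdk_ns internals are not shown in the log, so the namespace deletion below is an assumption):

    sync
    modprobe -v -r nvme-tcp       # also unloads nvme_fabrics / nvme_keyring, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed detail of _remove_spdk_ns
    ip -4 addr flush cvl_0_1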
15:29:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.229 15:29:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:04.229 ************************************ 00:24:04.229 END TEST nvmf_shutdown_tc3 00:24:04.229 ************************************ 00:24:04.229 15:29:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:04.229 15:29:08 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:04.229 00:24:04.229 real 0m32.145s 00:24:04.229 user 1m14.852s 00:24:04.229 sys 0m10.134s 00:24:04.229 15:29:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.229 15:29:08 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:04.229 ************************************ 00:24:04.229 END TEST nvmf_shutdown 00:24:04.229 ************************************ 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:04.229 15:29:08 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.229 15:29:08 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.229 15:29:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:04.229 15:29:08 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.229 15:29:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.487 ************************************ 00:24:04.487 START TEST nvmf_multicontroller 00:24:04.487 ************************************ 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:04.487 * Looking for test storage... 
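run_test is the wrapper producing the START TEST / END TEST banners around each suite; a simplified sketch of the idiom (the real helper in test/common/autotest_common.sh also records timing and manages xtrace):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        "$@"; local rc=$?
        echo "END TEST $name"
        return $rc
    }
    run_test nvmf_multicontroller ./test/nvmf/host/multicontroller.sh --transport=tcp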
00:24:04.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:04.487 15:29:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.487 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.488 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.488 15:29:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.488 15:29:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.087 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.088 15:29:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:11.088 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:11.088 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:11.088 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:11.362 Found net devices under 0000:af:00.0: cvl_0_0 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:11.362 Found net devices under 0000:af:00.1: cvl_0_1 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:11.362 15:29:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.362 15:29:15 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.362 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:24:11.621 00:24:11.621 --- 10.0.0.2 ping statistics --- 00:24:11.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.621 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:24:11.621 00:24:11.621 --- 10.0.0.1 ping statistics --- 00:24:11.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.621 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3131292 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3131292 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3131292 ']' 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.621 15:29:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.621 [2024-07-15 15:29:15.398879] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:11.621 [2024-07-15 15:29:15.398927] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.621 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.621 [2024-07-15 15:29:15.475298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:11.879 [2024-07-15 15:29:15.546044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.879 [2024-07-15 15:29:15.546085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.879 [2024-07-15 15:29:15.546095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.879 [2024-07-15 15:29:15.546103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.879 [2024-07-15 15:29:15.546125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
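The namespace plumbing and target launch above condense to the following sketch (interface names, addresses, and arguments are exactly those in the trace; waitforlisten's polling loop is simplified and paths are shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # cvl_0_0 becomes the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # simplified waitforlisten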
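From here the script provisions the target over RPC and then drives the duplicate-attach cases whose code -114 failures appear below; the sequence condenses to roughly this (each command appears verbatim in the trace that follows; the cnode2 calls mirror cnode1's with Malloc1):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ... cnode2 is built the same way from Malloc1 ...
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # a second attach under the same -b NVMe0 name (different hostnqn, different
    # subsystem, or with -x disable) is expected to fail with JSON-RPC code -114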
00:24:11.879 [2024-07-15 15:29:15.546226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.879 [2024-07-15 15:29:15.546318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.879 [2024-07-15 15:29:15.546319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.445 [2024-07-15 15:29:16.249881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.445 Malloc0 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.445 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.446 [2024-07-15 15:29:16.311969] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.446 
15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.446 [2024-07-15 15:29:16.319899] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.446 Malloc1 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.446 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3131570 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3131570 /var/tmp/bdevperf.sock 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3131570 ']' 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.704 15:29:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.640 NVMe0n1 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.640 1 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.640 request: 00:24:13.640 { 00:24:13.640 "name": "NVMe0", 00:24:13.640 "trtype": "tcp", 00:24:13.640 "traddr": "10.0.0.2", 00:24:13.640 "adrfam": "ipv4", 00:24:13.640 "trsvcid": "4420", 00:24:13.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.640 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:13.640 "hostaddr": "10.0.0.2", 00:24:13.640 "hostsvcid": "60000", 00:24:13.640 "prchk_reftag": false, 00:24:13.640 "prchk_guard": false, 00:24:13.640 "hdgst": false, 00:24:13.640 "ddgst": false, 00:24:13.640 "method": "bdev_nvme_attach_controller", 00:24:13.640 "req_id": 1 00:24:13.640 } 00:24:13.640 Got JSON-RPC error response 00:24:13.640 response: 00:24:13.640 { 00:24:13.640 "code": -114, 00:24:13.640 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:13.640 } 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:13.640 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.641 request: 00:24:13.641 { 00:24:13.641 "name": "NVMe0", 00:24:13.641 "trtype": "tcp", 00:24:13.641 "traddr": "10.0.0.2", 00:24:13.641 "adrfam": "ipv4", 00:24:13.641 "trsvcid": "4420", 00:24:13.641 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:13.641 "hostaddr": "10.0.0.2", 00:24:13.641 "hostsvcid": "60000", 00:24:13.641 "prchk_reftag": false, 00:24:13.641 "prchk_guard": false, 00:24:13.641 
"hdgst": false, 00:24:13.641 "ddgst": false, 00:24:13.641 "method": "bdev_nvme_attach_controller", 00:24:13.641 "req_id": 1 00:24:13.641 } 00:24:13.641 Got JSON-RPC error response 00:24:13.641 response: 00:24:13.641 { 00:24:13.641 "code": -114, 00:24:13.641 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:13.641 } 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.641 request: 00:24:13.641 { 00:24:13.641 "name": "NVMe0", 00:24:13.641 "trtype": "tcp", 00:24:13.641 "traddr": "10.0.0.2", 00:24:13.641 "adrfam": "ipv4", 00:24:13.641 "trsvcid": "4420", 00:24:13.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.641 "hostaddr": "10.0.0.2", 00:24:13.641 "hostsvcid": "60000", 00:24:13.641 "prchk_reftag": false, 00:24:13.641 "prchk_guard": false, 00:24:13.641 "hdgst": false, 00:24:13.641 "ddgst": false, 00:24:13.641 "multipath": "disable", 00:24:13.641 "method": "bdev_nvme_attach_controller", 00:24:13.641 "req_id": 1 00:24:13.641 } 00:24:13.641 Got JSON-RPC error response 00:24:13.641 response: 00:24:13.641 { 00:24:13.641 "code": -114, 00:24:13.641 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:13.641 } 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:13.641 15:29:17 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.641 request: 00:24:13.641 { 00:24:13.641 "name": "NVMe0", 00:24:13.641 "trtype": "tcp", 00:24:13.641 "traddr": "10.0.0.2", 00:24:13.641 "adrfam": "ipv4", 00:24:13.641 "trsvcid": "4420", 00:24:13.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.641 "hostaddr": "10.0.0.2", 00:24:13.641 "hostsvcid": "60000", 00:24:13.641 "prchk_reftag": false, 00:24:13.641 "prchk_guard": false, 00:24:13.641 "hdgst": false, 00:24:13.641 "ddgst": false, 00:24:13.641 "multipath": "failover", 00:24:13.641 "method": "bdev_nvme_attach_controller", 00:24:13.641 "req_id": 1 00:24:13.641 } 00:24:13.641 Got JSON-RPC error response 00:24:13.641 response: 00:24:13.641 { 00:24:13.641 "code": -114, 00:24:13.641 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:13.641 } 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.641 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.900 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- 
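
The -x option surfaces as the "multipath" field in the requests above: "disable" forbids any second attach under an existing name, while "failover" only permits one that adds a genuinely new path. Both fail here because the request repeats the primary path (port 4420); as observed next in this run, the same attach against the second listener is accepted:

  # rejected: same controller name, same path, multipath=failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
  # accepted: same controller name, new path on port 4421
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
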
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.900 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:13.900 15:29:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.278 0 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3131570 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3131570 ']' 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3131570 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3131570 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3131570' 00:24:15.278 killing process with pid 3131570 00:24:15.278 15:29:18 
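
bdevperf was started with -z, which keeps it idle until a perform_tests RPC arrives on the socket given via -r; the RPCs above wired NVMe0 (two paths) and NVMe1 into it first. The driver pattern, with paths relative to the SPDK tree:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &
  # ...attach controllers over /var/tmp/bdevperf.sock, then kick off the workload:
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
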
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3131570 00:24:15.278 15:29:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3131570 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:15.278 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:15.278 [2024-07-15 15:29:16.424977] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:15.278 [2024-07-15 15:29:16.425030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131570 ] 00:24:15.278 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.278 [2024-07-15 15:29:16.494109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.278 [2024-07-15 15:29:16.570668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.278 [2024-07-15 15:29:17.677266] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name f0137692-c641-4433-a4a3-fea5be61dbef already exists 00:24:15.278 [2024-07-15 15:29:17.677297] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:f0137692-c641-4433-a4a3-fea5be61dbef alias for bdev NVMe1n1 00:24:15.278 [2024-07-15 15:29:17.677308] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:15.278 Running I/O for 1 seconds... 
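
The try.txt dump above explains the ERROR lines: NVMe1 points at the same subsystem, so its namespace carries the same UUID (f0137692-...) as the bdev already registered for NVMe0, and spdk_bdev_register() for NVMe1n1 is rejected. That is expected here; the one-second write job whose latency table follows ran against NVMe0n1. State can be inspected over the same socket, assuming the run is still live:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # NVMe0 and NVMe1
  rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b NVMe0n1   # namespace UUID, block size
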
00:24:15.278 00:24:15.278 Latency(us) 00:24:15.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.278 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:15.278 NVMe0n1 : 1.00 26037.65 101.71 0.00 0.00 4906.19 2883.58 9961.47 00:24:15.278 =================================================================================================================== 00:24:15.278 Total : 26037.65 101.71 0.00 0.00 4906.19 2883.58 9961.47 00:24:15.278 Received shutdown signal, test time was about 1.000000 seconds 00:24:15.278 00:24:15.278 Latency(us) 00:24:15.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.278 =================================================================================================================== 00:24:15.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.278 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:15.278 rmmod nvme_tcp 00:24:15.278 rmmod nvme_fabrics 00:24:15.278 rmmod nvme_keyring 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3131292 ']' 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3131292 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3131292 ']' 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3131292 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.278 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3131292 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3131292' 00:24:15.537 killing process with pid 3131292 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3131292 00:24:15.537 15:29:19 
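
Teardown in nvmftestfini is symmetric with the setup: stop the target (pid 3131292 here), unload the kernel initiator modules, and flush the test addresses. The traced commands reduce to:

  modprobe -v -r nvme-tcp       # rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring going away
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1      # initiator interface name from this rig
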
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3131292 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.537 15:29:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.073 15:29:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.073 00:24:18.073 real 0m13.365s 00:24:18.073 user 0m16.395s 00:24:18.073 sys 0m6.404s 00:24:18.073 15:29:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:18.073 15:29:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.073 ************************************ 00:24:18.073 END TEST nvmf_multicontroller 00:24:18.073 ************************************ 00:24:18.073 15:29:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:18.073 15:29:21 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:18.073 15:29:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:18.073 15:29:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:18.073 15:29:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:18.073 ************************************ 00:24:18.073 START TEST nvmf_aer 00:24:18.073 ************************************ 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:18.073 * Looking for test storage... 
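
nvmf_aer begins the same way every host test does: aer.sh sources test/nvmf/common.sh, which fabricates a host identity with nvme-cli before any target setup. A minimal sketch of that step (the HOSTID derivation is an assumption based on the traced values, where the host ID equals the UUID suffix of the NQN):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:006f0d1b-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything up to the last ':'
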
00:24:18.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.073 15:29:21 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.074 15:29:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.635 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:24.636 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:24:24.636 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:24.636 Found net devices under 0000:af:00.0: cvl_0_0 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:24.636 Found net devices under 0000:af:00.1: cvl_0_1 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.636 
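
gather_supported_nvmf_pci_devs walked the PCI bus for known NVMe-oF-capable NICs and matched both ports of an Intel E810 (vendor 0x8086, device 0x159b, driver ice), then resolved each port to its net device through sysfs. A hedged lspci equivalent of that walk:

  lspci -d 8086:159b                                   # list the E810 ports
  for pci in $(lspci -Dmn -d 8086:159b | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$pci/net/"              # -> cvl_0_0, cvl_0_1 on this rig
  done
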
15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.636 15:29:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:24.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:24:24.636 00:24:24.636 --- 10.0.0.2 ping statistics --- 00:24:24.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.636 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:24:24.636 00:24:24.636 --- 10.0.0.1 ping statistics --- 00:24:24.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.636 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3135600 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3135600 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3135600 ']' 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.636 15:29:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:24.636 [2024-07-15 15:29:28.337254] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:24.636 [2024-07-15 15:29:28.337304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.636 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.636 [2024-07-15 15:29:28.412397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.636 [2024-07-15 15:29:28.487029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.636 [2024-07-15 15:29:28.487068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
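
nvmf_tcp_init split the two looped-back E810 ports into a tiny two-host rig: cvl_0_0 moved into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1), and the pings above verified both directions. Condensed from the trace (assumes the two ports are physically cabled to each other):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target

nvmf_tgt itself then runs inside the namespace, which is why the target app invocation below is wrapped in 'ip netns exec cvl_0_0_ns_spdk'.
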
00:24:24.636 [2024-07-15 15:29:28.487078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.636 [2024-07-15 15:29:28.487086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.636 [2024-07-15 15:29:28.487109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.636 [2024-07-15 15:29:28.487385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.636 [2024-07-15 15:29:28.487483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.636 [2024-07-15 15:29:28.487570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:24.636 [2024-07-15 15:29:28.487572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.572 [2024-07-15 15:29:29.186756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.572 Malloc0 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.572 [2024-07-15 15:29:29.241565] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.572 [ 00:24:25.572 { 00:24:25.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:25.572 "subtype": "Discovery", 00:24:25.572 "listen_addresses": [], 00:24:25.572 "allow_any_host": true, 00:24:25.572 "hosts": [] 00:24:25.572 }, 00:24:25.572 { 00:24:25.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.572 "subtype": "NVMe", 00:24:25.572 "listen_addresses": [ 00:24:25.572 { 00:24:25.572 "trtype": "TCP", 00:24:25.572 "adrfam": "IPv4", 00:24:25.572 "traddr": "10.0.0.2", 00:24:25.572 "trsvcid": "4420" 00:24:25.572 } 00:24:25.572 ], 00:24:25.572 "allow_any_host": true, 00:24:25.572 "hosts": [], 00:24:25.572 "serial_number": "SPDK00000000000001", 00:24:25.572 "model_number": "SPDK bdev Controller", 00:24:25.572 "max_namespaces": 2, 00:24:25.572 "min_cntlid": 1, 00:24:25.572 "max_cntlid": 65519, 00:24:25.572 "namespaces": [ 00:24:25.572 { 00:24:25.572 "nsid": 1, 00:24:25.572 "bdev_name": "Malloc0", 00:24:25.572 "name": "Malloc0", 00:24:25.572 "nguid": "773002C5AD53458D88AB5C107715210F", 00:24:25.572 "uuid": "773002c5-ad53-458d-88ab-5c107715210f" 00:24:25.572 } 00:24:25.572 ] 00:24:25.572 } 00:24:25.572 ] 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3135822 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:25.572 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:25.573 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:25.573 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.832 Malloc1 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.832 Asynchronous Event Request test 00:24:25.832 Attaching to 10.0.0.2 00:24:25.832 Attached to 10.0.0.2 00:24:25.832 Registering asynchronous event callbacks... 00:24:25.832 Starting namespace attribute notice tests for all controllers... 00:24:25.832 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:25.832 aer_cb - Changed Namespace 00:24:25.832 Cleaning up... 00:24:25.832 [ 00:24:25.832 { 00:24:25.832 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:25.832 "subtype": "Discovery", 00:24:25.832 "listen_addresses": [], 00:24:25.832 "allow_any_host": true, 00:24:25.832 "hosts": [] 00:24:25.832 }, 00:24:25.832 { 00:24:25.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.832 "subtype": "NVMe", 00:24:25.832 "listen_addresses": [ 00:24:25.832 { 00:24:25.832 "trtype": "TCP", 00:24:25.832 "adrfam": "IPv4", 00:24:25.832 "traddr": "10.0.0.2", 00:24:25.832 "trsvcid": "4420" 00:24:25.832 } 00:24:25.832 ], 00:24:25.832 "allow_any_host": true, 00:24:25.832 "hosts": [], 00:24:25.832 "serial_number": "SPDK00000000000001", 00:24:25.832 "model_number": "SPDK bdev Controller", 00:24:25.832 "max_namespaces": 2, 00:24:25.832 "min_cntlid": 1, 00:24:25.832 "max_cntlid": 65519, 00:24:25.832 "namespaces": [ 00:24:25.832 { 00:24:25.832 "nsid": 1, 00:24:25.832 "bdev_name": "Malloc0", 00:24:25.832 "name": "Malloc0", 00:24:25.832 "nguid": "773002C5AD53458D88AB5C107715210F", 00:24:25.832 "uuid": "773002c5-ad53-458d-88ab-5c107715210f" 00:24:25.832 }, 00:24:25.832 { 00:24:25.832 "nsid": 2, 00:24:25.832 "bdev_name": "Malloc1", 00:24:25.832 "name": "Malloc1", 00:24:25.832 "nguid": "D7496E2E2B7044868D2D1E6CC4EC4E7D", 00:24:25.832 "uuid": "d7496e2e-2b70-4486-8d2d-1e6cc4ec4e7d" 00:24:25.832 } 00:24:25.832 ] 00:24:25.832 } 00:24:25.832 ] 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3135822 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
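
The AER handshake above works in three steps: the aer test binary connects to cnode1 and arms its event callbacks, touches /tmp/aer_touch_file once ready (the waitforfile loop polls for it), and the script then hot-adds a second namespace, which makes the target emit the Changed Namespace notice (log page 4) that aer_cb reports before the tool cleans up. The hot-add that fires the event, as issued in the trace:

  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # nsid 2 -> AEN to the connected host
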
-- # rpc_cmd bdev_malloc_delete Malloc1 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.832 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.833 rmmod nvme_tcp 00:24:25.833 rmmod nvme_fabrics 00:24:25.833 rmmod nvme_keyring 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3135600 ']' 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3135600 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3135600 ']' 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3135600 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3135600 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3135600' 00:24:25.833 killing process with pid 3135600 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3135600 00:24:25.833 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3135600 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:24:26.092 15:29:29 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.624 15:29:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.624 00:24:28.624 real 0m10.399s 00:24:28.624 user 0m7.435s 00:24:28.624 sys 0m5.541s 00:24:28.624 15:29:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.624 15:29:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:28.624 ************************************ 00:24:28.624 END TEST nvmf_aer 00:24:28.624 ************************************ 00:24:28.624 15:29:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:28.624 15:29:32 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:28.624 15:29:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:28.624 15:29:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.624 15:29:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.624 ************************************ 00:24:28.624 START TEST nvmf_async_init 00:24:28.624 ************************************ 00:24:28.624 15:29:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:28.624 * Looking for test storage... 00:24:28.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=21233b6eecfa4489812c1963f2468045 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.625 15:29:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:35.191 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:35.191 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:35.191 Found net devices under 0000:af:00.0: cvl_0_0 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:35.191 Found net devices under 0000:af:00.1: cvl_0_1 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:35.191 
15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:24:35.191 00:24:35.191 --- 10.0.0.2 ping statistics --- 00:24:35.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.191 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:24:35.191 00:24:35.191 --- 10.0.0.1 ping statistics --- 00:24:35.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.191 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.191 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3139492 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 
3139492 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3139492 ']' 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.192 15:29:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.192 [2024-07-15 15:29:38.824900] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:35.192 [2024-07-15 15:29:38.824949] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.192 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.192 [2024-07-15 15:29:38.899491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.192 [2024-07-15 15:29:38.972868] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.192 [2024-07-15 15:29:38.972906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.192 [2024-07-15 15:29:38.972916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.192 [2024-07-15 15:29:38.972924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.192 [2024-07-15 15:29:38.972931] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
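Condensed, the namespace plumbing and target launch traced above reduce to the commands below. This is a sketch, not a drop-in script: the cvl_0_0/cvl_0_1 interface names and the build path are specific to this runner, and everything assumes root and a cwd at the SPDK checkout.

sudo ip -4 addr flush cvl_0_0
sudo ip -4 addr flush cvl_0_1
sudo ip netns add cvl_0_0_ns_spdk                  # the target side lives in its own namespace
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address on the host
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target reachability check
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace, same flags as in the trace above
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &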
00:24:35.192 [2024-07-15 15:29:38.972952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.759 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.018 [2024-07-15 15:29:39.671269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.018 null0 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 21233b6eecfa4489812c1963f2468045 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.018 [2024-07-15 15:29:39.711476] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.018 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.277 nvme0n1 00:24:36.277 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.277 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:36.277 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.277 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.277 [ 00:24:36.277 { 00:24:36.277 "name": "nvme0n1", 00:24:36.277 "aliases": [ 00:24:36.277 "21233b6e-ecfa-4489-812c-1963f2468045" 00:24:36.277 ], 00:24:36.277 "product_name": "NVMe disk", 00:24:36.277 "block_size": 512, 00:24:36.277 "num_blocks": 2097152, 00:24:36.277 "uuid": "21233b6e-ecfa-4489-812c-1963f2468045", 00:24:36.277 "assigned_rate_limits": { 00:24:36.277 "rw_ios_per_sec": 0, 00:24:36.277 "rw_mbytes_per_sec": 0, 00:24:36.277 "r_mbytes_per_sec": 0, 00:24:36.277 "w_mbytes_per_sec": 0 00:24:36.277 }, 00:24:36.277 "claimed": false, 00:24:36.277 "zoned": false, 00:24:36.277 "supported_io_types": { 00:24:36.277 "read": true, 00:24:36.277 "write": true, 00:24:36.277 "unmap": false, 00:24:36.277 "flush": true, 00:24:36.277 "reset": true, 00:24:36.277 "nvme_admin": true, 00:24:36.277 "nvme_io": true, 00:24:36.277 "nvme_io_md": false, 00:24:36.277 "write_zeroes": true, 00:24:36.277 "zcopy": false, 00:24:36.277 "get_zone_info": false, 00:24:36.277 "zone_management": false, 00:24:36.277 "zone_append": false, 00:24:36.277 "compare": true, 00:24:36.277 "compare_and_write": true, 00:24:36.277 "abort": true, 00:24:36.277 "seek_hole": false, 00:24:36.277 "seek_data": false, 00:24:36.277 "copy": true, 00:24:36.277 "nvme_iov_md": false 00:24:36.277 }, 00:24:36.277 "memory_domains": [ 00:24:36.277 { 00:24:36.278 "dma_device_id": "system", 00:24:36.278 "dma_device_type": 1 00:24:36.278 } 00:24:36.278 ], 00:24:36.278 "driver_specific": { 00:24:36.278 "nvme": [ 00:24:36.278 { 00:24:36.278 "trid": { 00:24:36.278 "trtype": "TCP", 00:24:36.278 "adrfam": "IPv4", 00:24:36.278 "traddr": "10.0.0.2", 00:24:36.278 "trsvcid": "4420", 00:24:36.278 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:36.278 }, 00:24:36.278 "ctrlr_data": { 00:24:36.278 "cntlid": 1, 00:24:36.278 "vendor_id": "0x8086", 00:24:36.278 "model_number": "SPDK bdev Controller", 00:24:36.278 "serial_number": "00000000000000000000", 00:24:36.278 "firmware_revision": "24.09", 00:24:36.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:36.278 "oacs": { 00:24:36.278 "security": 0, 00:24:36.278 "format": 0, 00:24:36.278 "firmware": 0, 00:24:36.278 "ns_manage": 0 00:24:36.278 }, 00:24:36.278 "multi_ctrlr": true, 00:24:36.278 "ana_reporting": false 00:24:36.278 }, 00:24:36.278 "vs": { 00:24:36.278 "nvme_version": "1.3" 00:24:36.278 }, 00:24:36.278 "ns_data": { 00:24:36.278 "id": 1, 00:24:36.278 "can_share": true 00:24:36.278 } 00:24:36.278 } 00:24:36.278 ], 00:24:36.278 "mp_policy": "active_passive" 00:24:36.278 } 00:24:36.278 } 00:24:36.278 ] 00:24:36.278 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.278 15:29:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
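For readers following the rpc_cmd calls scattered through the trace, the async_init setup path boils down to the sequence below, written against scripts/rpc.py (which rpc_cmd wraps). A sketch assuming the default /var/tmp/spdk.sock RPC socket and the 10.0.0.2 listener address used in this run.

nguid=$(uuidgen | tr -d -)                        # 32 hex digits, as async_init.sh derives it
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512  # 1024 MiB null bdev, 512 B blocks -> 2097152 blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# attach from the initiator side; namespace 1 then surfaces as bdev nvme0n1
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
# bdev_nvme_reset_controller nvme0 (the step traced next) drops and re-establishes this connection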
00:24:36.278 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.278 15:29:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 [2024-07-15 15:29:39.976043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:36.278 [2024-07-15 15:29:39.976096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189e30 (9): Bad file descriptor 00:24:36.278 [2024-07-15 15:29:40.107932] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 [ 00:24:36.278 { 00:24:36.278 "name": "nvme0n1", 00:24:36.278 "aliases": [ 00:24:36.278 "21233b6e-ecfa-4489-812c-1963f2468045" 00:24:36.278 ], 00:24:36.278 "product_name": "NVMe disk", 00:24:36.278 "block_size": 512, 00:24:36.278 "num_blocks": 2097152, 00:24:36.278 "uuid": "21233b6e-ecfa-4489-812c-1963f2468045", 00:24:36.278 "assigned_rate_limits": { 00:24:36.278 "rw_ios_per_sec": 0, 00:24:36.278 "rw_mbytes_per_sec": 0, 00:24:36.278 "r_mbytes_per_sec": 0, 00:24:36.278 "w_mbytes_per_sec": 0 00:24:36.278 }, 00:24:36.278 "claimed": false, 00:24:36.278 "zoned": false, 00:24:36.278 "supported_io_types": { 00:24:36.278 "read": true, 00:24:36.278 "write": true, 00:24:36.278 "unmap": false, 00:24:36.278 "flush": true, 00:24:36.278 "reset": true, 00:24:36.278 "nvme_admin": true, 00:24:36.278 "nvme_io": true, 00:24:36.278 "nvme_io_md": false, 00:24:36.278 "write_zeroes": true, 00:24:36.278 "zcopy": false, 00:24:36.278 "get_zone_info": false, 00:24:36.278 "zone_management": false, 00:24:36.278 "zone_append": false, 00:24:36.278 "compare": true, 00:24:36.278 "compare_and_write": true, 00:24:36.278 "abort": true, 00:24:36.278 "seek_hole": false, 00:24:36.278 "seek_data": false, 00:24:36.278 "copy": true, 00:24:36.278 "nvme_iov_md": false 00:24:36.278 }, 00:24:36.278 "memory_domains": [ 00:24:36.278 { 00:24:36.278 "dma_device_id": "system", 00:24:36.278 "dma_device_type": 1 00:24:36.278 } 00:24:36.278 ], 00:24:36.278 "driver_specific": { 00:24:36.278 "nvme": [ 00:24:36.278 { 00:24:36.278 "trid": { 00:24:36.278 "trtype": "TCP", 00:24:36.278 "adrfam": "IPv4", 00:24:36.278 "traddr": "10.0.0.2", 00:24:36.278 "trsvcid": "4420", 00:24:36.278 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:36.278 }, 00:24:36.278 "ctrlr_data": { 00:24:36.278 "cntlid": 2, 00:24:36.278 "vendor_id": "0x8086", 00:24:36.278 "model_number": "SPDK bdev Controller", 00:24:36.278 "serial_number": "00000000000000000000", 00:24:36.278 "firmware_revision": "24.09", 00:24:36.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:36.278 "oacs": { 00:24:36.278 "security": 0, 00:24:36.278 "format": 0, 00:24:36.278 "firmware": 0, 00:24:36.278 "ns_manage": 0 00:24:36.278 }, 00:24:36.278 "multi_ctrlr": true, 00:24:36.278 "ana_reporting": false 00:24:36.278 }, 00:24:36.278 "vs": { 00:24:36.278 "nvme_version": "1.3" 00:24:36.278 }, 00:24:36.278 "ns_data": { 00:24:36.278 "id": 1, 00:24:36.278 "can_share": true 00:24:36.278 } 00:24:36.278 } 00:24:36.278 ], 00:24:36.278 "mp_policy": "active_passive" 00:24:36.278 } 00:24:36.278 } 
00:24:36.278 ] 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.278 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.73aUIfH9TV 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.73aUIfH9TV 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.279 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.279 [2024-07-15 15:29:40.180711] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:36.279 [2024-07-15 15:29:40.180866] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.73aUIfH9TV 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.537 [2024-07-15 15:29:40.188725] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.73aUIfH9TV 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.537 [2024-07-15 15:29:40.200778] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.537 [2024-07-15 15:29:40.200828] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
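The TLS leg uses the interchange-format test key printed verbatim above. Sketched below with the same flags; note the run itself warns that both the file-based PSK path and spdk_nvme_ctrlr_opts.psk are deprecated and slated for removal in v24.09, so treat this as a snapshot of the 24.09-pre API rather than current practice.

key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"                            # keep the PSK file private, as the test does
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
rm -f "$key_path"                                 # removed again at the end of the test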
00:24:36.537 nvme0n1 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.537 [ 00:24:36.537 { 00:24:36.537 "name": "nvme0n1", 00:24:36.537 "aliases": [ 00:24:36.537 "21233b6e-ecfa-4489-812c-1963f2468045" 00:24:36.537 ], 00:24:36.537 "product_name": "NVMe disk", 00:24:36.537 "block_size": 512, 00:24:36.537 "num_blocks": 2097152, 00:24:36.537 "uuid": "21233b6e-ecfa-4489-812c-1963f2468045", 00:24:36.537 "assigned_rate_limits": { 00:24:36.537 "rw_ios_per_sec": 0, 00:24:36.537 "rw_mbytes_per_sec": 0, 00:24:36.537 "r_mbytes_per_sec": 0, 00:24:36.537 "w_mbytes_per_sec": 0 00:24:36.537 }, 00:24:36.537 "claimed": false, 00:24:36.537 "zoned": false, 00:24:36.537 "supported_io_types": { 00:24:36.537 "read": true, 00:24:36.537 "write": true, 00:24:36.537 "unmap": false, 00:24:36.537 "flush": true, 00:24:36.537 "reset": true, 00:24:36.537 "nvme_admin": true, 00:24:36.537 "nvme_io": true, 00:24:36.537 "nvme_io_md": false, 00:24:36.537 "write_zeroes": true, 00:24:36.537 "zcopy": false, 00:24:36.537 "get_zone_info": false, 00:24:36.537 "zone_management": false, 00:24:36.537 "zone_append": false, 00:24:36.537 "compare": true, 00:24:36.537 "compare_and_write": true, 00:24:36.537 "abort": true, 00:24:36.537 "seek_hole": false, 00:24:36.537 "seek_data": false, 00:24:36.537 "copy": true, 00:24:36.537 "nvme_iov_md": false 00:24:36.537 }, 00:24:36.537 "memory_domains": [ 00:24:36.537 { 00:24:36.537 "dma_device_id": "system", 00:24:36.537 "dma_device_type": 1 00:24:36.537 } 00:24:36.537 ], 00:24:36.537 "driver_specific": { 00:24:36.537 "nvme": [ 00:24:36.537 { 00:24:36.537 "trid": { 00:24:36.537 "trtype": "TCP", 00:24:36.537 "adrfam": "IPv4", 00:24:36.537 "traddr": "10.0.0.2", 00:24:36.537 "trsvcid": "4421", 00:24:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:36.537 }, 00:24:36.537 "ctrlr_data": { 00:24:36.537 "cntlid": 3, 00:24:36.537 "vendor_id": "0x8086", 00:24:36.537 "model_number": "SPDK bdev Controller", 00:24:36.537 "serial_number": "00000000000000000000", 00:24:36.537 "firmware_revision": "24.09", 00:24:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:36.537 "oacs": { 00:24:36.537 "security": 0, 00:24:36.537 "format": 0, 00:24:36.537 "firmware": 0, 00:24:36.537 "ns_manage": 0 00:24:36.537 }, 00:24:36.537 "multi_ctrlr": true, 00:24:36.537 "ana_reporting": false 00:24:36.537 }, 00:24:36.537 "vs": { 00:24:36.537 "nvme_version": "1.3" 00:24:36.537 }, 00:24:36.537 "ns_data": { 00:24:36.537 "id": 1, 00:24:36.537 "can_share": true 00:24:36.537 } 00:24:36.537 } 00:24:36.537 ], 00:24:36.537 "mp_policy": "active_passive" 00:24:36.537 } 00:24:36.537 } 00:24:36.537 ] 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.73aUIfH9TV 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:36.537 rmmod nvme_tcp 00:24:36.537 rmmod nvme_fabrics 00:24:36.537 rmmod nvme_keyring 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3139492 ']' 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3139492 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3139492 ']' 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3139492 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3139492 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3139492' 00:24:36.537 killing process with pid 3139492 00:24:36.537 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3139492 00:24:36.537 [2024-07-15 15:29:40.438587] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:36.538 [2024-07-15 15:29:40.438615] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:36.538 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3139492 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.796 15:29:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:39.365 15:29:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.365 00:24:39.365 real 0m10.613s 00:24:39.365 user 0m3.741s 00:24:39.365 sys 0m5.495s 00:24:39.365 15:29:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:39.365 15:29:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.365 ************************************ 00:24:39.365 END TEST nvmf_async_init 00:24:39.365 ************************************ 00:24:39.365 15:29:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:39.365 15:29:42 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:39.365 15:29:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:39.365 15:29:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.365 15:29:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:39.365 ************************************ 00:24:39.365 START TEST dma 00:24:39.365 ************************************ 00:24:39.365 15:29:42 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:39.365 * Looking for test storage... 00:24:39.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.366 15:29:42 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.366 15:29:42 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.366 15:29:42 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.366 15:29:42 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.366 15:29:42 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.366 15:29:42 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.366 15:29:42 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.366 15:29:42 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:39.366 15:29:42 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.366 15:29:42 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.366 15:29:42 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:39.366 15:29:42 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:39.366 00:24:39.366 real 0m0.123s 00:24:39.366 user 0m0.046s 00:24:39.366 sys 0m0.087s 00:24:39.366 15:29:42 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:39.366 15:29:42 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:39.366 ************************************ 00:24:39.366 END TEST dma 00:24:39.366 ************************************ 00:24:39.366 15:29:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:39.366 15:29:42 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:39.366 15:29:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:39.366 15:29:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.366 15:29:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:39.366 ************************************ 00:24:39.366 START TEST nvmf_identify 00:24:39.366 ************************************ 00:24:39.366 15:29:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:39.366 * Looking for test storage... 00:24:39.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.366 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.367 15:29:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:45.961 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:45.961 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:45.961 Found net devices under 0000:af:00.0: cvl_0_0 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:45.961 Found net devices under 0000:af:00.1: cvl_0_1 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.961 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:24:45.962 00:24:45.962 --- 10.0.0.2 ping statistics --- 00:24:45.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.962 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:24:45.962 00:24:45.962 --- 10.0.0.1 ping statistics --- 00:24:45.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.962 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3143515 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3143515 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3143515 ']' 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.962 15:29:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.221 [2024-07-15 15:29:49.913380] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:24:46.221 [2024-07-15 15:29:49.913435] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.221 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.221 [2024-07-15 15:29:49.988497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.221 [2024-07-15 15:29:50.067380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.221 [2024-07-15 15:29:50.067423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.221 [2024-07-15 15:29:50.067433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.221 [2024-07-15 15:29:50.067442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.221 [2024-07-15 15:29:50.067449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.221 [2024-07-15 15:29:50.067495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.221 [2024-07-15 15:29:50.067588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.221 [2024-07-15 15:29:50.067673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.221 [2024-07-15 15:29:50.067674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 [2024-07-15 15:29:50.722619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 Malloc0 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 [2024-07-15 15:29:50.821319] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.174 [ 00:24:47.174 { 00:24:47.174 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:47.174 "subtype": "Discovery", 00:24:47.174 "listen_addresses": [ 00:24:47.174 { 00:24:47.174 "trtype": "TCP", 00:24:47.174 "adrfam": "IPv4", 00:24:47.174 "traddr": "10.0.0.2", 00:24:47.174 "trsvcid": "4420" 00:24:47.174 } 00:24:47.174 ], 00:24:47.174 "allow_any_host": true, 00:24:47.174 "hosts": [] 00:24:47.174 }, 00:24:47.174 { 00:24:47.174 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.174 "subtype": "NVMe", 00:24:47.174 "listen_addresses": [ 00:24:47.174 { 00:24:47.174 "trtype": "TCP", 00:24:47.174 "adrfam": "IPv4", 00:24:47.174 "traddr": "10.0.0.2", 00:24:47.174 "trsvcid": "4420" 00:24:47.174 } 00:24:47.174 ], 00:24:47.174 "allow_any_host": true, 00:24:47.174 "hosts": [], 00:24:47.174 "serial_number": "SPDK00000000000001", 00:24:47.174 "model_number": "SPDK bdev Controller", 00:24:47.174 "max_namespaces": 32, 00:24:47.174 "min_cntlid": 1, 00:24:47.174 "max_cntlid": 65519, 00:24:47.174 "namespaces": [ 00:24:47.174 { 00:24:47.174 "nsid": 1, 00:24:47.174 "bdev_name": "Malloc0", 00:24:47.174 "name": "Malloc0", 00:24:47.174 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:47.174 "eui64": "ABCDEF0123456789", 00:24:47.174 "uuid": "1f7ad079-ab62-49c1-a861-be444e4d046f" 00:24:47.174 } 00:24:47.174 ] 00:24:47.174 } 00:24:47.174 ] 00:24:47.174 15:29:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.175 15:29:50 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:47.175 [2024-07-15 15:29:50.879454] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:24:47.175 [2024-07-15 15:29:50.879495] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143749 ] 00:24:47.175 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.175 [2024-07-15 15:29:50.911259] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:47.175 [2024-07-15 15:29:50.911310] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:47.175 [2024-07-15 15:29:50.911316] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:47.175 [2024-07-15 15:29:50.911328] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:47.175 [2024-07-15 15:29:50.911336] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:47.175 [2024-07-15 15:29:50.911779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:47.175 [2024-07-15 15:29:50.911809] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1466f00 0 00:24:47.175 [2024-07-15 15:29:50.925844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:47.175 [2024-07-15 15:29:50.925860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:47.175 [2024-07-15 15:29:50.925865] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:47.175 [2024-07-15 15:29:50.925870] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:47.175 [2024-07-15 15:29:50.925911] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.925918] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.925924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.925937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:47.175 [2024-07-15 15:29:50.925955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.933845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.175 [2024-07-15 15:29:50.933855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.175 [2024-07-15 15:29:50.933859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.933865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.175 [2024-07-15 15:29:50.933875] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:47.175 [2024-07-15 15:29:50.933882] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:47.175 [2024-07-15 15:29:50.933889] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:47.175 [2024-07-15 15:29:50.933903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.933908] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.933913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.933921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.175 [2024-07-15 15:29:50.933934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.934135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.175 [2024-07-15 15:29:50.934142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.175 [2024-07-15 15:29:50.934146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.175 [2024-07-15 15:29:50.934158] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:47.175 [2024-07-15 15:29:50.934167] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:47.175 [2024-07-15 15:29:50.934175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934185] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.934192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.175 [2024-07-15 15:29:50.934204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.934298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.175 [2024-07-15 15:29:50.934305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.175 [2024-07-15 15:29:50.934309] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.175 [2024-07-15 15:29:50.934320] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:47.175 [2024-07-15 15:29:50.934330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:47.175 [2024-07-15 15:29:50.934337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.934354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.175 [2024-07-15 15:29:50.934365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.934466] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.175 
[2024-07-15 15:29:50.934473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.175 [2024-07-15 15:29:50.934478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.175 [2024-07-15 15:29:50.934488] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:47.175 [2024-07-15 15:29:50.934499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934504] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.934515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.175 [2024-07-15 15:29:50.934527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.934699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.175 [2024-07-15 15:29:50.934706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.175 [2024-07-15 15:29:50.934711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.175 [2024-07-15 15:29:50.934721] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:47.175 [2024-07-15 15:29:50.934727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:47.175 [2024-07-15 15:29:50.934737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:47.175 [2024-07-15 15:29:50.934843] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:47.175 [2024-07-15 15:29:50.934850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:47.175 [2024-07-15 15:29:50.934859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.934868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.934877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.175 [2024-07-15 15:29:50.934889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.935002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.175 [2024-07-15 15:29:50.935008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.175 [2024-07-15 15:29:50.935013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.935018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.175 [2024-07-15 15:29:50.935023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:47.175 [2024-07-15 15:29:50.935034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.935039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.935043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.935050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.175 [2024-07-15 15:29:50.935062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.935168] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.175 [2024-07-15 15:29:50.935174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.175 [2024-07-15 15:29:50.935179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.935184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.175 [2024-07-15 15:29:50.935189] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:47.175 [2024-07-15 15:29:50.935195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:47.175 [2024-07-15 15:29:50.935204] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:47.175 [2024-07-15 15:29:50.935214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:47.175 [2024-07-15 15:29:50.935224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.935229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.175 [2024-07-15 15:29:50.935236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.175 [2024-07-15 15:29:50.935248] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.175 [2024-07-15 15:29:50.935369] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.175 [2024-07-15 15:29:50.935376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.175 [2024-07-15 15:29:50.935380] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.175 [2024-07-15 15:29:50.935385] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466f00): datao=0, datal=4096, cccid=0 00:24:47.175 [2024-07-15 15:29:50.935391] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d1e40) on tqpair(0x1466f00): expected_datao=0, payload_size=4096 00:24:47.176 [2024-07-15 15:29:50.935397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.935504] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.935510] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.975927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.176 [2024-07-15 15:29:50.975941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.176 [2024-07-15 15:29:50.975949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.975954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.176 [2024-07-15 15:29:50.975963] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:47.176 [2024-07-15 15:29:50.975973] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:47.176 [2024-07-15 15:29:50.975979] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:47.176 [2024-07-15 15:29:50.975986] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:47.176 [2024-07-15 15:29:50.975991] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:47.176 [2024-07-15 15:29:50.975998] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:47.176 [2024-07-15 15:29:50.976008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:47.176 [2024-07-15 15:29:50.976016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.976034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.176 [2024-07-15 15:29:50.976048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.176 [2024-07-15 15:29:50.976141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.176 [2024-07-15 15:29:50.976147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.176 [2024-07-15 15:29:50.976152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.176 [2024-07-15 15:29:50.976165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976174] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.976181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.176 [2024-07-15 15:29:50.976188] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976198] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.976204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.176 [2024-07-15 15:29:50.976211] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.976227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.176 [2024-07-15 15:29:50.976234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.976252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.176 [2024-07-15 15:29:50.976258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:47.176 [2024-07-15 15:29:50.976271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:47.176 [2024-07-15 15:29:50.976278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976283] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.976290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.176 [2024-07-15 15:29:50.976303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1e40, cid 0, qid 0 00:24:47.176 [2024-07-15 15:29:50.976308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d1fc0, cid 1, qid 0 00:24:47.176 [2024-07-15 15:29:50.976314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d2140, cid 2, qid 0 00:24:47.176 [2024-07-15 15:29:50.976319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d22c0, cid 3, qid 0 00:24:47.176 [2024-07-15 15:29:50.976325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d2440, cid 4, qid 0 00:24:47.176 [2024-07-15 15:29:50.976547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.176 [2024-07-15 15:29:50.976554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.176 [2024-07-15 15:29:50.976558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d2440) on tqpair=0x1466f00 00:24:47.176 [2024-07-15 15:29:50.976568] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:47.176 [2024-07-15 15:29:50.976575] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:47.176 [2024-07-15 15:29:50.976586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.976598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.176 [2024-07-15 15:29:50.976610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d2440, cid 4, qid 0 00:24:47.176 [2024-07-15 15:29:50.976707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.176 [2024-07-15 15:29:50.976714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.176 [2024-07-15 15:29:50.976719] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976723] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466f00): datao=0, datal=4096, cccid=4 00:24:47.176 [2024-07-15 15:29:50.976729] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d2440) on tqpair(0x1466f00): expected_datao=0, payload_size=4096 00:24:47.176 [2024-07-15 15:29:50.976735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976867] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976872] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.176 [2024-07-15 15:29:50.976943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.176 [2024-07-15 15:29:50.976947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d2440) on tqpair=0x1466f00 00:24:47.176 [2024-07-15 15:29:50.976965] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:47.176 [2024-07-15 15:29:50.976990] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.976996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.977003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.176 [2024-07-15 15:29:50.977011] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.977015] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.977020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:50.977026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.176 [2024-07-15 15:29:50.977042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x14d2440, cid 4, qid 0 00:24:47.176 [2024-07-15 15:29:50.977048] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d25c0, cid 5, qid 0 00:24:47.176 [2024-07-15 15:29:50.977172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.176 [2024-07-15 15:29:50.977179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.176 [2024-07-15 15:29:50.977183] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.977188] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466f00): datao=0, datal=1024, cccid=4 00:24:47.176 [2024-07-15 15:29:50.977194] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d2440) on tqpair(0x1466f00): expected_datao=0, payload_size=1024 00:24:47.176 [2024-07-15 15:29:50.977199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.977206] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.977211] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.977217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.176 [2024-07-15 15:29:50.977223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.176 [2024-07-15 15:29:50.977227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:50.977232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d25c0) on tqpair=0x1466f00 00:24:47.176 [2024-07-15 15:29:51.021842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.176 [2024-07-15 15:29:51.021855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.176 [2024-07-15 15:29:51.021859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:51.021865] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d2440) on tqpair=0x1466f00 00:24:47.176 [2024-07-15 15:29:51.021883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:51.021888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466f00) 00:24:47.176 [2024-07-15 15:29:51.021896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.176 [2024-07-15 15:29:51.021915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d2440, cid 4, qid 0 00:24:47.176 [2024-07-15 15:29:51.022088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.176 [2024-07-15 15:29:51.022095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.176 [2024-07-15 15:29:51.022099] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:51.022104] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466f00): datao=0, datal=3072, cccid=4 00:24:47.176 [2024-07-15 15:29:51.022110] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d2440) on tqpair(0x1466f00): expected_datao=0, payload_size=3072 00:24:47.176 [2024-07-15 15:29:51.022116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.176 [2024-07-15 15:29:51.022225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.177 [2024-07-15 15:29:51.022230] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.177 [2024-07-15 15:29:51.063010] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.177 [2024-07-15 15:29:51.063021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.177 [2024-07-15 15:29:51.063026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.177 [2024-07-15 15:29:51.063031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d2440) on tqpair=0x1466f00 00:24:47.177 [2024-07-15 15:29:51.063042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.177 [2024-07-15 15:29:51.063047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1466f00) 00:24:47.177 [2024-07-15 15:29:51.063054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.177 [2024-07-15 15:29:51.063072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d2440, cid 4, qid 0 00:24:47.177 [2024-07-15 15:29:51.063263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.177 [2024-07-15 15:29:51.063269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.177 [2024-07-15 15:29:51.063274] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.177 [2024-07-15 15:29:51.063278] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1466f00): datao=0, datal=8, cccid=4 00:24:47.177 [2024-07-15 15:29:51.063284] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d2440) on tqpair(0x1466f00): expected_datao=0, payload_size=8 00:24:47.177 [2024-07-15 15:29:51.063290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.177 [2024-07-15 15:29:51.063297] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.177 [2024-07-15 15:29:51.063302] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.438 [2024-07-15 15:29:51.104027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.439 [2024-07-15 15:29:51.104039] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.439 [2024-07-15 15:29:51.104044] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d2440) on tqpair=0x1466f00 00:24:47.439 ===================================================== 00:24:47.439 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:47.439 ===================================================== 00:24:47.439 Controller Capabilities/Features 00:24:47.439 ================================ 00:24:47.439 Vendor ID: 0000 00:24:47.439 Subsystem Vendor ID: 0000 00:24:47.439 Serial Number: .................... 00:24:47.439 Model Number: ........................................ 
00:24:47.439 Firmware Version: 24.09 00:24:47.439 Recommended Arb Burst: 0 00:24:47.439 IEEE OUI Identifier: 00 00 00 00:24:47.439 Multi-path I/O 00:24:47.439 May have multiple subsystem ports: No 00:24:47.439 May have multiple controllers: No 00:24:47.439 Associated with SR-IOV VF: No 00:24:47.439 Max Data Transfer Size: 131072 00:24:47.439 Max Number of Namespaces: 0 00:24:47.439 Max Number of I/O Queues: 1024 00:24:47.439 NVMe Specification Version (VS): 1.3 00:24:47.439 NVMe Specification Version (Identify): 1.3 00:24:47.439 Maximum Queue Entries: 128 00:24:47.439 Contiguous Queues Required: Yes 00:24:47.439 Arbitration Mechanisms Supported 00:24:47.439 Weighted Round Robin: Not Supported 00:24:47.439 Vendor Specific: Not Supported 00:24:47.439 Reset Timeout: 15000 ms 00:24:47.439 Doorbell Stride: 4 bytes 00:24:47.439 NVM Subsystem Reset: Not Supported 00:24:47.439 Command Sets Supported 00:24:47.439 NVM Command Set: Supported 00:24:47.439 Boot Partition: Not Supported 00:24:47.439 Memory Page Size Minimum: 4096 bytes 00:24:47.439 Memory Page Size Maximum: 4096 bytes 00:24:47.439 Persistent Memory Region: Not Supported 00:24:47.439 Optional Asynchronous Events Supported 00:24:47.439 Namespace Attribute Notices: Not Supported 00:24:47.439 Firmware Activation Notices: Not Supported 00:24:47.439 ANA Change Notices: Not Supported 00:24:47.439 PLE Aggregate Log Change Notices: Not Supported 00:24:47.439 LBA Status Info Alert Notices: Not Supported 00:24:47.439 EGE Aggregate Log Change Notices: Not Supported 00:24:47.439 Normal NVM Subsystem Shutdown event: Not Supported 00:24:47.439 Zone Descriptor Change Notices: Not Supported 00:24:47.439 Discovery Log Change Notices: Supported 00:24:47.439 Controller Attributes 00:24:47.439 128-bit Host Identifier: Not Supported 00:24:47.439 Non-Operational Permissive Mode: Not Supported 00:24:47.439 NVM Sets: Not Supported 00:24:47.439 Read Recovery Levels: Not Supported 00:24:47.439 Endurance Groups: Not Supported 00:24:47.439 Predictable Latency Mode: Not Supported 00:24:47.439 Traffic Based Keep ALive: Not Supported 00:24:47.439 Namespace Granularity: Not Supported 00:24:47.439 SQ Associations: Not Supported 00:24:47.439 UUID List: Not Supported 00:24:47.439 Multi-Domain Subsystem: Not Supported 00:24:47.439 Fixed Capacity Management: Not Supported 00:24:47.439 Variable Capacity Management: Not Supported 00:24:47.439 Delete Endurance Group: Not Supported 00:24:47.439 Delete NVM Set: Not Supported 00:24:47.439 Extended LBA Formats Supported: Not Supported 00:24:47.439 Flexible Data Placement Supported: Not Supported 00:24:47.439 00:24:47.439 Controller Memory Buffer Support 00:24:47.439 ================================ 00:24:47.439 Supported: No 00:24:47.439 00:24:47.439 Persistent Memory Region Support 00:24:47.439 ================================ 00:24:47.439 Supported: No 00:24:47.439 00:24:47.439 Admin Command Set Attributes 00:24:47.439 ============================ 00:24:47.439 Security Send/Receive: Not Supported 00:24:47.439 Format NVM: Not Supported 00:24:47.439 Firmware Activate/Download: Not Supported 00:24:47.439 Namespace Management: Not Supported 00:24:47.439 Device Self-Test: Not Supported 00:24:47.439 Directives: Not Supported 00:24:47.439 NVMe-MI: Not Supported 00:24:47.439 Virtualization Management: Not Supported 00:24:47.439 Doorbell Buffer Config: Not Supported 00:24:47.439 Get LBA Status Capability: Not Supported 00:24:47.439 Command & Feature Lockdown Capability: Not Supported 00:24:47.439 Abort Command Limit: 1 00:24:47.439 Async 
Event Request Limit: 4 00:24:47.439 Number of Firmware Slots: N/A 00:24:47.439 Firmware Slot 1 Read-Only: N/A 00:24:47.439 Firmware Activation Without Reset: N/A 00:24:47.439 Multiple Update Detection Support: N/A 00:24:47.439 Firmware Update Granularity: No Information Provided 00:24:47.439 Per-Namespace SMART Log: No 00:24:47.439 Asymmetric Namespace Access Log Page: Not Supported 00:24:47.439 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:47.439 Command Effects Log Page: Not Supported 00:24:47.439 Get Log Page Extended Data: Supported 00:24:47.439 Telemetry Log Pages: Not Supported 00:24:47.439 Persistent Event Log Pages: Not Supported 00:24:47.439 Supported Log Pages Log Page: May Support 00:24:47.439 Commands Supported & Effects Log Page: Not Supported 00:24:47.439 Feature Identifiers & Effects Log Page:May Support 00:24:47.439 NVMe-MI Commands & Effects Log Page: May Support 00:24:47.439 Data Area 4 for Telemetry Log: Not Supported 00:24:47.439 Error Log Page Entries Supported: 128 00:24:47.439 Keep Alive: Not Supported 00:24:47.439 00:24:47.439 NVM Command Set Attributes 00:24:47.439 ========================== 00:24:47.439 Submission Queue Entry Size 00:24:47.439 Max: 1 00:24:47.439 Min: 1 00:24:47.439 Completion Queue Entry Size 00:24:47.439 Max: 1 00:24:47.439 Min: 1 00:24:47.439 Number of Namespaces: 0 00:24:47.439 Compare Command: Not Supported 00:24:47.439 Write Uncorrectable Command: Not Supported 00:24:47.439 Dataset Management Command: Not Supported 00:24:47.439 Write Zeroes Command: Not Supported 00:24:47.439 Set Features Save Field: Not Supported 00:24:47.439 Reservations: Not Supported 00:24:47.439 Timestamp: Not Supported 00:24:47.439 Copy: Not Supported 00:24:47.439 Volatile Write Cache: Not Present 00:24:47.439 Atomic Write Unit (Normal): 1 00:24:47.439 Atomic Write Unit (PFail): 1 00:24:47.439 Atomic Compare & Write Unit: 1 00:24:47.439 Fused Compare & Write: Supported 00:24:47.439 Scatter-Gather List 00:24:47.439 SGL Command Set: Supported 00:24:47.439 SGL Keyed: Supported 00:24:47.439 SGL Bit Bucket Descriptor: Not Supported 00:24:47.439 SGL Metadata Pointer: Not Supported 00:24:47.439 Oversized SGL: Not Supported 00:24:47.439 SGL Metadata Address: Not Supported 00:24:47.439 SGL Offset: Supported 00:24:47.439 Transport SGL Data Block: Not Supported 00:24:47.439 Replay Protected Memory Block: Not Supported 00:24:47.439 00:24:47.439 Firmware Slot Information 00:24:47.439 ========================= 00:24:47.439 Active slot: 0 00:24:47.439 00:24:47.439 00:24:47.439 Error Log 00:24:47.439 ========= 00:24:47.439 00:24:47.439 Active Namespaces 00:24:47.439 ================= 00:24:47.439 Discovery Log Page 00:24:47.439 ================== 00:24:47.439 Generation Counter: 2 00:24:47.439 Number of Records: 2 00:24:47.439 Record Format: 0 00:24:47.439 00:24:47.439 Discovery Log Entry 0 00:24:47.439 ---------------------- 00:24:47.439 Transport Type: 3 (TCP) 00:24:47.439 Address Family: 1 (IPv4) 00:24:47.439 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:47.439 Entry Flags: 00:24:47.439 Duplicate Returned Information: 1 00:24:47.439 Explicit Persistent Connection Support for Discovery: 1 00:24:47.439 Transport Requirements: 00:24:47.439 Secure Channel: Not Required 00:24:47.439 Port ID: 0 (0x0000) 00:24:47.439 Controller ID: 65535 (0xffff) 00:24:47.439 Admin Max SQ Size: 128 00:24:47.439 Transport Service Identifier: 4420 00:24:47.439 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:47.439 Transport Address: 10.0.0.2 00:24:47.439 
Discovery Log Entry 1 00:24:47.439 ---------------------- 00:24:47.439 Transport Type: 3 (TCP) 00:24:47.439 Address Family: 1 (IPv4) 00:24:47.439 Subsystem Type: 2 (NVM Subsystem) 00:24:47.439 Entry Flags: 00:24:47.439 Duplicate Returned Information: 0 00:24:47.439 Explicit Persistent Connection Support for Discovery: 0 00:24:47.439 Transport Requirements: 00:24:47.439 Secure Channel: Not Required 00:24:47.439 Port ID: 0 (0x0000) 00:24:47.439 Controller ID: 65535 (0xffff) 00:24:47.439 Admin Max SQ Size: 128 00:24:47.439 Transport Service Identifier: 4420 00:24:47.439 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:47.439 Transport Address: 10.0.0.2 [2024-07-15 15:29:51.104137] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:47.439 [2024-07-15 15:29:51.104149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1e40) on tqpair=0x1466f00 00:24:47.439 [2024-07-15 15:29:51.104156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.439 [2024-07-15 15:29:51.104162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d1fc0) on tqpair=0x1466f00 00:24:47.439 [2024-07-15 15:29:51.104168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.439 [2024-07-15 15:29:51.104174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d2140) on tqpair=0x1466f00 00:24:47.439 [2024-07-15 15:29:51.104179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.439 [2024-07-15 15:29:51.104185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d22c0) on tqpair=0x1466f00 00:24:47.439 [2024-07-15 15:29:51.104191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.439 [2024-07-15 15:29:51.104202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466f00) 00:24:47.439 [2024-07-15 15:29:51.104220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.439 [2024-07-15 15:29:51.104237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d22c0, cid 3, qid 0 00:24:47.439 [2024-07-15 15:29:51.104386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.439 [2024-07-15 15:29:51.104393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.439 [2024-07-15 15:29:51.104398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d22c0) on tqpair=0x1466f00 00:24:47.439 [2024-07-15 15:29:51.104411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466f00) 00:24:47.439 [2024-07-15 
15:29:51.104427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.439 [2024-07-15 15:29:51.104443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d22c0, cid 3, qid 0 00:24:47.439 [2024-07-15 15:29:51.104584] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.439 [2024-07-15 15:29:51.104591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.439 [2024-07-15 15:29:51.104595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d22c0) on tqpair=0x1466f00 00:24:47.439 [2024-07-15 15:29:51.104606] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:47.439 [2024-07-15 15:29:51.104612] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:47.439 [2024-07-15 15:29:51.104623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.439 [2024-07-15 15:29:51.104632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466f00) 00:24:47.439 [2024-07-15 15:29:51.104639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.104651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d22c0, cid 3, qid 0 00:24:47.440 [2024-07-15 15:29:51.107840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.107850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.107854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.107859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d22c0) on tqpair=0x1466f00 00:24:47.440 [2024-07-15 15:29:51.107871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.107876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.107880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1466f00) 00:24:47.440 [2024-07-15 15:29:51.107888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.107901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d22c0, cid 3, qid 0 00:24:47.440 [2024-07-15 15:29:51.108083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.108090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.108095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.108099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14d22c0) on tqpair=0x1466f00 00:24:47.440 [2024-07-15 15:29:51.108108] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 3 milliseconds 00:24:47.440 00:24:47.440 15:29:51 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:47.440 [2024-07-15 15:29:51.149602] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:47.440 [2024-07-15 15:29:51.149643] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143797 ] 00:24:47.440 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.440 [2024-07-15 15:29:51.181923] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:47.440 [2024-07-15 15:29:51.181973] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:47.440 [2024-07-15 15:29:51.181979] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:47.440 [2024-07-15 15:29:51.181994] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:47.440 [2024-07-15 15:29:51.182001] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:47.440 [2024-07-15 15:29:51.182424] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:47.440 [2024-07-15 15:29:51.182450] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c3df00 0 00:24:47.440 [2024-07-15 15:29:51.196843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:47.440 [2024-07-15 15:29:51.196857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:47.440 [2024-07-15 15:29:51.196862] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:47.440 [2024-07-15 15:29:51.196867] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:47.440 [2024-07-15 15:29:51.196903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.196909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.196914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.196935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:47.440 [2024-07-15 15:29:51.196951] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.204844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.204853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.204858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.204863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.204876] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:47.440 [2024-07-15 15:29:51.204882] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:47.440 [2024-07-15 15:29:51.204889] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:47.440 [2024-07-15 15:29:51.204902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.204908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.204912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.204921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.204935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.205166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.205173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.205178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.205189] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:47.440 [2024-07-15 15:29:51.205198] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:47.440 [2024-07-15 15:29:51.205206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.205223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.205236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.205361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.205368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.205373] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.205383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:47.440 [2024-07-15 15:29:51.205393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:47.440 [2024-07-15 15:29:51.205400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.205416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.205428] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.205519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.205526] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.205531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.205541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:47.440 [2024-07-15 15:29:51.205552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205561] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.205568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.205580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.205713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.205720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.205724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.205737] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:47.440 [2024-07-15 15:29:51.205743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:47.440 [2024-07-15 15:29:51.205752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:47.440 [2024-07-15 15:29:51.205859] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:47.440 [2024-07-15 15:29:51.205864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:47.440 [2024-07-15 15:29:51.205872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.205882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.205889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.205901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.205993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.206000] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.206005] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206009] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.206015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:47.440 [2024-07-15 15:29:51.206026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.206042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.206054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.206196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.206202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.206207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.206217] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:47.440 [2024-07-15 15:29:51.206223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:47.440 [2024-07-15 15:29:51.206232] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:47.440 [2024-07-15 15:29:51.206242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:47.440 [2024-07-15 15:29:51.206251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.206263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.440 [2024-07-15 15:29:51.206278] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.206402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.440 [2024-07-15 15:29:51.206408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.440 [2024-07-15 15:29:51.206413] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206418] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=4096, cccid=0 00:24:47.440 [2024-07-15 15:29:51.206424] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca8e40) on tqpair(0x1c3df00): expected_datao=0, 
payload_size=4096 00:24:47.440 [2024-07-15 15:29:51.206429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206437] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206442] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.206504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.206509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.206522] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:47.440 [2024-07-15 15:29:51.206531] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:47.440 [2024-07-15 15:29:51.206537] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:47.440 [2024-07-15 15:29:51.206541] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:47.440 [2024-07-15 15:29:51.206547] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:47.440 [2024-07-15 15:29:51.206553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:47.440 [2024-07-15 15:29:51.206563] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:47.440 [2024-07-15 15:29:51.206571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.206587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.440 [2024-07-15 15:29:51.206600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.440 [2024-07-15 15:29:51.206692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.440 [2024-07-15 15:29:51.206698] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.440 [2024-07-15 15:29:51.206703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00 00:24:47.440 [2024-07-15 15:29:51.206714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c3df00) 00:24:47.440 [2024-07-15 15:29:51.206730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
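The identify step completing above reports the limits the host will honor for this controller: transport max_xfer_size 4294967295 capped by MDTS to 131072 bytes, CNTLID 0x0001, 16 transport SGEs, and fused compare-and-write support, after which four Asynchronous Event Requests are armed. For reference, a minimal host-side sketch of the same connect-and-identify flow against this listener, written against the public SPDK C API (a hypothetical standalone program, not part of the test scripts; error handling trimmed):

    /* sketch_identify.c: hypothetical example that mirrors what
     * spdk_nvme_identify does against the target exercised in this log. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify -r. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Runs the connect/enable/identify state machine traced in this log. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        /* MDTS is a power-of-two exponent over the minimum page size; here it
         * resolves to the 131072-byte max_xfer_size shown above. */
        printf("cntlid 0x%04x, mdts %u\n", cdata->cntlid, cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }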
00:24:47.440 [2024-07-15 15:29:51.206737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.440 [2024-07-15 15:29:51.206749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.206755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.441 [2024-07-15 15:29:51.206762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.206766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.206771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.206777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.441 [2024-07-15 15:29:51.206784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.206789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.206793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.206799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.441 [2024-07-15 15:29:51.206805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.206818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.206825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.206830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.206842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.206856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8e40, cid 0, qid 0 00:24:47.441 [2024-07-15 15:29:51.206878] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca8fc0, cid 1, qid 0 00:24:47.441 [2024-07-15 15:29:51.206883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9140, cid 2, qid 0 00:24:47.441 [2024-07-15 15:29:51.206889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca92c0, cid 3, qid 0 00:24:47.441 [2024-07-15 15:29:51.206894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9440, cid 4, qid 0 00:24:47.441 [2024-07-15 15:29:51.207040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.207047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.207052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.207057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9440) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.207062] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:47.441 [2024-07-15 15:29:51.207069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.207079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.207086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.207093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.207098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.207103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.207112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.441 [2024-07-15 15:29:51.207124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9440, cid 4, qid 0 00:24:47.441 [2024-07-15 15:29:51.207241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.207248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.207253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.207257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9440) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.207309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.207320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.207328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.207333] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.207340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.207352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9440, cid 4, qid 0 00:24:47.441 [2024-07-15 15:29:51.207456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.441 [2024-07-15 15:29:51.207463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.441 [2024-07-15 15:29:51.207468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.207473] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=4096, cccid=4 00:24:47.441 [2024-07-15 15:29:51.207479] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca9440) on tqpair(0x1c3df00): expected_datao=0, payload_size=4096 00:24:47.441 [2024-07-15 15:29:51.207485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.441 [2024-07-15 
15:29:51.207599] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.207604] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.248009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.248024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.248029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.248034] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9440) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.248046] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:47.441 [2024-07-15 15:29:51.248062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.248073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.248082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.248087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.248095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.248109] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9440, cid 4, qid 0 00:24:47.441 [2024-07-15 15:29:51.251843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.441 [2024-07-15 15:29:51.251854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.441 [2024-07-15 15:29:51.251859] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.251866] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=4096, cccid=4 00:24:47.441 [2024-07-15 15:29:51.251873] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca9440) on tqpair(0x1c3df00): expected_datao=0, payload_size=4096 00:24:47.441 [2024-07-15 15:29:51.251879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.251886] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.251891] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.251898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.251906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.251911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.251917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9440) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.251932] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.251944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:47.441 
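At this point the controller has answered the active-namespace-list IDENTIFY (CNS 0x02, the 4096-byte transfer above), "Namespace 1 was added", and the host moves on to identify ns (CNS 0x00) and the namespace ID descriptors (CNS 0x03, continued below). Continuing the hypothetical sketch from earlier, the populated namespace list can be walked with the SPDK host API:

    /* Sketch continuation (assumes the ctrlr from the example above): walk
     * the active namespaces the identify sequence just discovered. */
    uint32_t nsid;
    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        printf("nsid %u: %llu bytes, %u-byte sectors\n", nsid,
               (unsigned long long)spdk_nvme_ns_get_size(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }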
[2024-07-15 15:29:51.251953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.251960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.251968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.251983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9440, cid 4, qid 0 00:24:47.441 [2024-07-15 15:29:51.252176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.441 [2024-07-15 15:29:51.252183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.441 [2024-07-15 15:29:51.252188] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.252192] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=4096, cccid=4 00:24:47.441 [2024-07-15 15:29:51.252198] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca9440) on tqpair(0x1c3df00): expected_datao=0, payload_size=4096 00:24:47.441 [2024-07-15 15:29:51.252205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.252328] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.252333] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.293025] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.293030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9440) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.293044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.293055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.293068] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.293075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.293082] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.293088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.293097] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:47.441 [2024-07-15 15:29:51.293103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:47.441 [2024-07-15 15:29:51.293110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to ready (no timeout) 00:24:47.441 [2024-07-15 15:29:51.293125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.293146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.441 [2024-07-15 15:29:51.293179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9440, cid 4, qid 0 00:24:47.441 [2024-07-15 15:29:51.293185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca95c0, cid 5, qid 0 00:24:47.441 [2024-07-15 15:29:51.293300] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.293308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.293312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9440) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.293324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.293331] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.293335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca95c0) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.293351] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.293375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca95c0, cid 5, qid 0 00:24:47.441 [2024-07-15 15:29:51.293567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.293574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.293578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca95c0) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.293594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.293617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca95c0, cid 5, qid 0 00:24:47.441 [2024-07-15 15:29:51.293707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.293714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.293744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca95c0) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.293760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.293784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca95c0, cid 5, qid 0 00:24:47.441 [2024-07-15 15:29:51.293875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.441 [2024-07-15 15:29:51.293883] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.441 [2024-07-15 15:29:51.293887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293892] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca95c0) on tqpair=0x1c3df00 00:24:47.441 [2024-07-15 15:29:51.293909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293915] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.293930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.293949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.441 [2024-07-15 15:29:51.293969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.441 [2024-07-15 15:29:51.293974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c3df00) 00:24:47.441 [2024-07-15 15:29:51.293981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
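The four GET LOG PAGE commands just printed pack the log identifier into CDW10 bits 7:0 and the zero-based dword count (NUMDL) into bits 31:16 (NUMDU sits in CDW11, zero here), which is exactly why the c2h_data transfers that follow carry 8192, 512, 512, and 4096 bytes. A spec-level decode of those values (hypothetical helper, not SPDK code):

    /* Decode GET LOG PAGE CDW10 as printed in this trace:
     *   cdw10:07ff0001 -> LID 0x01 Error Information,            (0x7ff+1)*4 = 8192 B
     *   cdw10:007f0002 -> LID 0x02 SMART / Health,               (0x07f+1)*4 =  512 B
     *   cdw10:007f0003 -> LID 0x03 Firmware Slot Information,    (0x07f+1)*4 =  512 B
     *   cdw10:03ff0005 -> LID 0x05 Commands Supported & Effects, (0x3ff+1)*4 = 4096 B */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_get_log_page_cdw10(uint32_t cdw10)
    {
        uint8_t  lid   = cdw10 & 0xff;   /* Log Page Identifier            */
        uint16_t numdl = cdw10 >> 16;    /* dwords to transfer, zero-based */
        printf("LID 0x%02x, %u bytes\n", lid, ((uint32_t)numdl + 1) * 4);
    }

The 8192-byte error log also agrees with the "Error Log Page Entries Supported: 128" line earlier: 128 entries of 64 bytes each.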
00:24:47.442 [2024-07-15 15:29:51.293994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca95c0, cid 5, qid 0 00:24:47.442 [2024-07-15 15:29:51.294000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9440, cid 4, qid 0 00:24:47.442 [2024-07-15 15:29:51.294005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca9740, cid 6, qid 0 00:24:47.442 [2024-07-15 15:29:51.294011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca98c0, cid 7, qid 0 00:24:47.442 [2024-07-15 15:29:51.294169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.442 [2024-07-15 15:29:51.294176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.442 [2024-07-15 15:29:51.294181] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294185] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=8192, cccid=5 00:24:47.442 [2024-07-15 15:29:51.294191] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca95c0) on tqpair(0x1c3df00): expected_datao=0, payload_size=8192 00:24:47.442 [2024-07-15 15:29:51.294197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294449] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294456] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.442 [2024-07-15 15:29:51.294469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.442 [2024-07-15 15:29:51.294474] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294478] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=512, cccid=4 00:24:47.442 [2024-07-15 15:29:51.294484] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca9440) on tqpair(0x1c3df00): expected_datao=0, payload_size=512 00:24:47.442 [2024-07-15 15:29:51.294490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294497] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294502] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.442 [2024-07-15 15:29:51.294514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.442 [2024-07-15 15:29:51.294519] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294523] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=512, cccid=6 00:24:47.442 [2024-07-15 15:29:51.294529] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca9740) on tqpair(0x1c3df00): expected_datao=0, payload_size=512 00:24:47.442 [2024-07-15 15:29:51.294535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294542] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294546] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.442 [2024-07-15 15:29:51.294553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.442 
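Those transfers also show the NVMe/TCP framing this test exercises: each exchange is a capsule_cmd out, then "pdu type = 5" (a response capsule) or "pdu type = 7" (a C2HData PDU, whose datao/datal offsets are handled by nvme_tcp_c2h_data_hdr_handle above) back in. The numbers follow the NVMe/TCP PDU type encoding; a reference sketch with descriptive names (these are not SPDK identifiers):

    /* NVMe/TCP PDU types behind the "pdu type = N" lines in this trace. */
    enum nvme_tcp_pdu_type_sketch {
        PDU_IC_REQ       = 0x0, /* connection initialization request     */
        PDU_IC_RESP      = 0x1, /* "pdu type = 1" answering the icreq    */
        PDU_H2C_TERM_REQ = 0x2, /* host-initiated termination            */
        PDU_C2H_TERM_REQ = 0x3, /* controller-initiated termination      */
        PDU_CAPSULE_CMD  = 0x4, /* command capsule, host to controller   */
        PDU_CAPSULE_RESP = 0x5, /* "pdu type = 5": completion capsule    */
        PDU_H2C_DATA     = 0x6, /* host to controller data               */
        PDU_C2H_DATA     = 0x7, /* "pdu type = 7": data with datao/datal */
        PDU_R2T          = 0x9  /* ready to transfer, for writes         */
    };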
[2024-07-15 15:29:51.294559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:47.442 [2024-07-15 15:29:51.294563] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294568] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c3df00): datao=0, datal=4096, cccid=7
00:24:47.442 [2024-07-15 15:29:51.294574] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ca98c0) on tqpair(0x1c3df00): expected_datao=0, payload_size=4096
00:24:47.442 [2024-07-15 15:29:51.294580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294587] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294591] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.442 [2024-07-15 15:29:51.294610] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.442 [2024-07-15 15:29:51.294615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca95c0) on tqpair=0x1c3df00
00:24:47.442 [2024-07-15 15:29:51.294633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.442 [2024-07-15 15:29:51.294640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.442 [2024-07-15 15:29:51.294644] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9440) on tqpair=0x1c3df00
00:24:47.442 [2024-07-15 15:29:51.294661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.442 [2024-07-15 15:29:51.294667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.442 [2024-07-15 15:29:51.294672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9740) on tqpair=0x1c3df00
00:24:47.442 [2024-07-15 15:29:51.294684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.442 [2024-07-15 15:29:51.294691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.442 [2024-07-15 15:29:51.294697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.442 [2024-07-15 15:29:51.294702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca98c0) on tqpair=0x1c3df00
00:24:47.442 =====================================================
00:24:47.442 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:47.442 =====================================================
00:24:47.442 Controller Capabilities/Features
00:24:47.442 ================================
00:24:47.442 Vendor ID: 8086
00:24:47.442 Subsystem Vendor ID: 8086
00:24:47.442 Serial Number: SPDK00000000000001
00:24:47.442 Model Number: SPDK bdev Controller
00:24:47.442 Firmware Version: 24.09
00:24:47.442 Recommended Arb Burst: 6
00:24:47.442 IEEE OUI Identifier: e4 d2 5c
00:24:47.442 Multi-path I/O
00:24:47.442 May have multiple subsystem ports: Yes
00:24:47.442 May have multiple controllers: Yes
00:24:47.442 Associated with SR-IOV VF: No
00:24:47.442 Max Data Transfer Size: 131072
00:24:47.442 Max Number of Namespaces: 32
00:24:47.442 Max Number of I/O Queues: 127
00:24:47.442 NVMe Specification Version (VS): 1.3
00:24:47.442 NVMe Specification Version (Identify): 1.3
00:24:47.442 Maximum Queue Entries: 128
00:24:47.442 Contiguous Queues Required: Yes
00:24:47.442 Arbitration Mechanisms Supported
00:24:47.442 Weighted Round Robin: Not Supported
00:24:47.442 Vendor Specific: Not Supported
00:24:47.442 Reset Timeout: 15000 ms
00:24:47.442 Doorbell Stride: 4 bytes
00:24:47.442 NVM Subsystem Reset: Not Supported
00:24:47.442 Command Sets Supported
00:24:47.442 NVM Command Set: Supported
00:24:47.442 Boot Partition: Not Supported
00:24:47.442 Memory Page Size Minimum: 4096 bytes
00:24:47.442 Memory Page Size Maximum: 4096 bytes
00:24:47.442 Persistent Memory Region: Not Supported
00:24:47.442 Optional Asynchronous Events Supported
00:24:47.442 Namespace Attribute Notices: Supported
00:24:47.442 Firmware Activation Notices: Not Supported
00:24:47.442 ANA Change Notices: Not Supported
00:24:47.442 PLE Aggregate Log Change Notices: Not Supported
00:24:47.442 LBA Status Info Alert Notices: Not Supported
00:24:47.442 EGE Aggregate Log Change Notices: Not Supported
00:24:47.442 Normal NVM Subsystem Shutdown event: Not Supported
00:24:47.442 Zone Descriptor Change Notices: Not Supported
00:24:47.442 Discovery Log Change Notices: Not Supported
00:24:47.442 Controller Attributes
00:24:47.442 128-bit Host Identifier: Supported
00:24:47.442 Non-Operational Permissive Mode: Not Supported
00:24:47.442 NVM Sets: Not Supported
00:24:47.442 Read Recovery Levels: Not Supported
00:24:47.442 Endurance Groups: Not Supported
00:24:47.442 Predictable Latency Mode: Not Supported
00:24:47.442 Traffic Based Keep Alive: Not Supported
00:24:47.442 Namespace Granularity: Not Supported
00:24:47.442 SQ Associations: Not Supported
00:24:47.442 UUID List: Not Supported
00:24:47.442 Multi-Domain Subsystem: Not Supported
00:24:47.442 Fixed Capacity Management: Not Supported
00:24:47.442 Variable Capacity Management: Not Supported
00:24:47.442 Delete Endurance Group: Not Supported
00:24:47.442 Delete NVM Set: Not Supported
00:24:47.442 Extended LBA Formats Supported: Not Supported
00:24:47.442 Flexible Data Placement Supported: Not Supported
00:24:47.442
00:24:47.442 Controller Memory Buffer Support
00:24:47.442 ================================
00:24:47.442 Supported: No
00:24:47.442
00:24:47.442 Persistent Memory Region Support
00:24:47.442 ================================
00:24:47.442 Supported: No
00:24:47.442
00:24:47.442 Admin Command Set Attributes
00:24:47.442 ============================
00:24:47.442 Security Send/Receive: Not Supported
00:24:47.442 Format NVM: Not Supported
00:24:47.442 Firmware Activate/Download: Not Supported
00:24:47.442 Namespace Management: Not Supported
00:24:47.442 Device Self-Test: Not Supported
00:24:47.442 Directives: Not Supported
00:24:47.442 NVMe-MI: Not Supported
00:24:47.442 Virtualization Management: Not Supported
00:24:47.442 Doorbell Buffer Config: Not Supported
00:24:47.442 Get LBA Status Capability: Not Supported
00:24:47.442 Command & Feature Lockdown Capability: Not Supported
00:24:47.442 Abort Command Limit: 4
00:24:47.442 Async Event Request Limit: 4
00:24:47.442 Number of Firmware Slots: N/A
00:24:47.442 Firmware Slot 1 Read-Only: N/A
00:24:47.442 Firmware Activation Without Reset: N/A
00:24:47.442 Multiple Update Detection Support: N/A
00:24:47.442 Firmware Update Granularity: No Information Provided
00:24:47.442 Per-Namespace SMART Log: No
00:24:47.442 Asymmetric Namespace Access Log Page: Not Supported
00:24:47.442 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:47.442 Command Effects Log Page: Supported
00:24:47.442 Get Log Page Extended Data: Supported
00:24:47.442 Telemetry Log Pages: Not Supported
00:24:47.442 Persistent Event Log Pages: Not Supported
00:24:47.442 Supported Log Pages Log Page: May Support
00:24:47.442 Commands Supported & Effects Log Page: Not Supported
00:24:47.442 Feature Identifiers & Effects Log Page: May Support
00:24:47.442 NVMe-MI Commands & Effects Log Page: May Support
00:24:47.442 Data Area 4 for Telemetry Log: Not Supported
00:24:47.442 Error Log Page Entries Supported: 128
00:24:47.442 Keep Alive: Supported
00:24:47.442 Keep Alive Granularity: 10000 ms
00:24:47.442
00:24:47.442 NVM Command Set Attributes
00:24:47.442 ==========================
00:24:47.442 Submission Queue Entry Size
00:24:47.442 Max: 64
00:24:47.442 Min: 64
00:24:47.442 Completion Queue Entry Size
00:24:47.442 Max: 16
00:24:47.442 Min: 16
00:24:47.442 Number of Namespaces: 32
00:24:47.442 Compare Command: Supported
00:24:47.442 Write Uncorrectable Command: Not Supported
00:24:47.442 Dataset Management Command: Supported
00:24:47.442 Write Zeroes Command: Supported
00:24:47.442 Set Features Save Field: Not Supported
00:24:47.442 Reservations: Supported
00:24:47.442 Timestamp: Not Supported
00:24:47.442 Copy: Supported
00:24:47.442 Volatile Write Cache: Present
00:24:47.442 Atomic Write Unit (Normal): 1
00:24:47.442 Atomic Write Unit (PFail): 1
00:24:47.442 Atomic Compare & Write Unit: 1
00:24:47.442 Fused Compare & Write: Supported
00:24:47.442 Scatter-Gather List
00:24:47.442 SGL Command Set: Supported
00:24:47.442 SGL Keyed: Supported
00:24:47.442 SGL Bit Bucket Descriptor: Not Supported
00:24:47.442 SGL Metadata Pointer: Not Supported
00:24:47.442 Oversized SGL: Not Supported
00:24:47.442 SGL Metadata Address: Not Supported
00:24:47.442 SGL Offset: Supported
00:24:47.442 Transport SGL Data Block: Not Supported
00:24:47.442 Replay Protected Memory Block: Not Supported
00:24:47.442
00:24:47.442 Firmware Slot Information
00:24:47.442 =========================
00:24:47.442 Active slot: 1
00:24:47.442 Slot 1 Firmware Revision: 24.09
00:24:47.442
00:24:47.442
00:24:47.442 Commands Supported and Effects
00:24:47.442 ==============================
00:24:47.442 Admin Commands
00:24:47.442 --------------
00:24:47.442 Get Log Page (02h): Supported
00:24:47.442 Identify (06h): Supported
00:24:47.442 Abort (08h): Supported
00:24:47.442 Set Features (09h): Supported
00:24:47.442 Get Features (0Ah): Supported
00:24:47.442 Asynchronous Event Request (0Ch): Supported
00:24:47.442 Keep Alive (18h): Supported
00:24:47.442 I/O Commands
00:24:47.442 ------------
00:24:47.442 Flush (00h): Supported LBA-Change
00:24:47.442 Write (01h): Supported LBA-Change
00:24:47.442 Read (02h): Supported
00:24:47.442 Compare (05h): Supported
00:24:47.442 Write Zeroes (08h): Supported LBA-Change
00:24:47.442 Dataset Management (09h): Supported LBA-Change
00:24:47.442 Copy (19h): Supported LBA-Change
00:24:47.442
00:24:47.442 Error Log
00:24:47.442 =========
00:24:47.442
00:24:47.442 Arbitration
00:24:47.442 ===========
00:24:47.442 Arbitration Burst: 1
00:24:47.442
00:24:47.443 Power Management
00:24:47.443 ================
00:24:47.443 Number of Power States: 1
00:24:47.443 Current Power State: Power State #0
00:24:47.443 Power State #0:
00:24:47.443 Max Power: 0.00 W
00:24:47.443 Non-Operational State: Operational
00:24:47.443 Entry Latency: Not Reported
00:24:47.443 Exit Latency: Not Reported
00:24:47.443 Relative Read Throughput: 0
00:24:47.443 Relative Read Latency: 0
00:24:47.443 Relative Write Throughput: 0
00:24:47.443 Relative Write Latency: 0
00:24:47.443 Idle Power: Not Reported
00:24:47.443 Active Power: Not Reported
00:24:47.443 Non-Operational Permissive Mode: Not Supported
00:24:47.443
00:24:47.443 Health Information
00:24:47.443 ==================
00:24:47.443 Critical Warnings:
00:24:47.443 Available Spare Space: OK
00:24:47.443 Temperature: OK
00:24:47.443 Device Reliability: OK
00:24:47.443 Read Only: No
00:24:47.443 Volatile Memory Backup: OK
00:24:47.443 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:47.443 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:47.443 Available Spare: 0%
00:24:47.443 Available Spare Threshold: 0%
00:24:47.443 Life Percentage Used:[2024-07-15 15:29:51.294793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.294799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c3df00)
00:24:47.443 [2024-07-15 15:29:51.294806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.443 [2024-07-15 15:29:51.294820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca98c0, cid 7, qid 0
00:24:47.443 [2024-07-15 15:29:51.298839] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.443 [2024-07-15 15:29:51.298847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.443 [2024-07-15 15:29:51.298852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.298857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca98c0) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.298892] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:24:47.443 [2024-07-15 15:29:51.298903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8e40) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.298910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.443 [2024-07-15 15:29:51.298916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca8fc0) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.298922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.443 [2024-07-15 15:29:51.298928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca9140) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.298934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.443 [2024-07-15 15:29:51.298940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca92c0) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.298946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.443 [2024-07-15 15:29:51.298955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.298960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.298965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3df00)
00:24:47.443 [2024-07-15
00:24:47.443 [2024-07-15 15:29:51.298972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.443 [2024-07-15 15:29:51.298986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca92c0, cid 3, qid 0
00:24:47.443 [2024-07-15 15:29:51.299174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.443 [2024-07-15 15:29:51.299181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.443 [2024-07-15 15:29:51.299186] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca92c0) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.299198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3df00)
00:24:47.443 [2024-07-15 15:29:51.299215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.443 [2024-07-15 15:29:51.299230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca92c0, cid 3, qid 0
00:24:47.443 [2024-07-15 15:29:51.299328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.443 [2024-07-15 15:29:51.299335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.443 [2024-07-15 15:29:51.299342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca92c0) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.299352] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:24:47.443 [2024-07-15 15:29:51.299358] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:24:47.443 [2024-07-15 15:29:51.299369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3df00)
00:24:47.443 [2024-07-15 15:29:51.299386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.443 [2024-07-15 15:29:51.299397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca92c0, cid 3, qid 0
00:24:47.443 [2024-07-15 15:29:51.299489] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.443 [2024-07-15 15:29:51.299496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.443 [2024-07-15 15:29:51.299501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca92c0) on tqpair=0x1c3df00
00:24:47.443 [2024-07-15 15:29:51.299516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299521] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.443 [2024-07-15 15:29:51.299526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3df00)
00:24:47.443 [2024-07-15 15:29:51.299533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.443 [2024-07-15 15:29:51.299544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca92c0, cid 3, qid 0
00:24:47.443 [2024-07-15 15:29:51.302825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.444 [2024-07-15 15:29:51.306837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.444 [2024-07-15 15:29:51.306844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.444 [2024-07-15 15:29:51.306849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca92c0) on tqpair=0x1c3df00
00:24:47.444 [2024-07-15 15:29:51.306861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:47.444 [2024-07-15 15:29:51.306866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:47.444 [2024-07-15 15:29:51.306871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c3df00)
00:24:47.444 [2024-07-15 15:29:51.306878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.444 [2024-07-15 15:29:51.306891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ca92c0, cid 3, qid 0
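The records above are the fabrics shutdown handshake: one Property Set writes CC.SHN (normal shutdown), then the host polls CSTS with Property Get until CSTS.SHST reads "shutdown complete". Because the controller reports RTD3E = 0, the host falls back to its default 10000 ms budget; the completion below arrives after 7 ms. A rough shell equivalent, assuming an nvme-cli build with fabrics property support (the output parsing is an assumption, since the format varies across nvme-cli versions):

    # Hedged sketch: poll CSTS.SHST (property offset 0x1c, bits 3:2) on a connected controller.
    # /dev/nvme0 is hypothetical; SHST value 2 (10b) means shutdown processing complete.
    for _ in $(seq 1 100); do
        csts=$(nvme get-property /dev/nvme0 --offset=0x1c | awk '{print $NF}')
        if [ $(( (csts >> 2) & 0x3 )) -eq 2 ]; then
            echo "shutdown complete"
            break
        fi
        sleep 0.1
    done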
00:24:47.444 [2024-07-15 15:29:51.307072] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:47.444 [2024-07-15 15:29:51.307079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:47.444 [2024-07-15 15:29:51.307084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:47.444 [2024-07-15 15:29:51.307088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ca92c0) on tqpair=0x1c3df00
00:24:47.444 [2024-07-15 15:29:51.307097] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:24:47.444
00:24:47.444 Number of Queues
00:24:47.444 ================
00:24:47.444 Number of I/O Submission Queues: 127
00:24:47.444 Number of I/O Completion Queues: 127
00:24:47.444
00:24:47.444 Active Namespaces
00:24:47.444 =================
00:24:47.444 Namespace ID:1
00:24:47.444 Error Recovery Timeout: Unlimited
00:24:47.444 Command Set Identifier: NVM (00h)
00:24:47.444 Deallocate: Supported
00:24:47.444 Deallocated/Unwritten Error: Not Supported
00:24:47.444 Deallocated Read Value: Unknown
00:24:47.444 Deallocate in Write Zeroes: Not Supported
00:24:47.444 Deallocated Guard Field: 0xFFFF
00:24:47.444 Flush: Supported
00:24:47.444 Reservation: Supported
00:24:47.444 Namespace Sharing Capabilities: Multiple Controllers
00:24:47.444 Size (in LBAs): 131072 (0GiB)
00:24:47.444 Capacity (in LBAs): 131072 (0GiB)
00:24:47.444 Utilization (in LBAs): 131072 (0GiB)
00:24:47.444 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:47.444 EUI64: ABCDEF0123456789
00:24:47.444 UUID: 1f7ad079-ab62-49c1-a861-be444e4d046f
00:24:47.444 Thin Provisioning: Not Supported
00:24:47.444 Per-NS Atomic Units: Yes
00:24:47.444 Atomic Boundary Size (Normal): 0
00:24:47.444 Atomic Boundary Size (PFail): 0
00:24:47.444 Atomic Boundary Offset: 0
00:24:47.444 Maximum Single Source Range Length: 65535
00:24:47.444 Maximum Copy Length: 65535
00:24:47.444 Maximum Source Range Count: 1
00:24:47.444 NGUID/EUI64 Never Reused: No
00:24:47.444 Namespace Write Protected: No
00:24:47.444 Number of LBA Formats: 1
00:24:47.444 Current LBA Format: LBA Format #00
00:24:47.444 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:47.444
00:24:47.444
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify --
nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.444 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.703 rmmod nvme_tcp 00:24:47.703 rmmod nvme_fabrics 00:24:47.703 rmmod nvme_keyring 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3143515 ']' 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3143515 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3143515 ']' 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3143515 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3143515 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3143515' 00:24:47.703 killing process with pid 3143515 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3143515 00:24:47.703 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3143515 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.962 15:29:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.866 15:29:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.866 00:24:49.866 real 0m10.773s 00:24:49.866 user 0m8.148s 00:24:49.866 sys 0m5.718s 00:24:49.866 15:29:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.866 15:29:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:49.866 ************************************ 00:24:49.866 END TEST nvmf_identify 00:24:49.866 ************************************ 00:24:50.124 15:29:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.124 15:29:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test 
nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:50.124 15:29:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.124 15:29:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.124 15:29:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.124 ************************************ 00:24:50.124 START TEST nvmf_perf 00:24:50.124 ************************************ 00:24:50.124 15:29:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:50.124 * Looking for test storage... 00:24:50.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.125 15:29:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:56.689 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:56.689 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:56.689 Found net devices under 0000:af:00.0: cvl_0_0 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.689 15:30:00 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:56.689 Found net devices under 0000:af:00.1: cvl_0_1 00:24:56.689 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:56.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:24:56.690 00:24:56.690 --- 10.0.0.2 ping statistics --- 00:24:56.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.690 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:24:56.690 00:24:56.690 --- 10.0.0.1 ping statistics --- 00:24:56.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.690 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3147453 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3147453 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3147453 ']' 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.690 15:30:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:56.690 [2024-07-15 15:30:00.488888] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
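The target side of this run is the trace above condensed: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace and the harness blocks in waitforlisten until the RPC socket answers. A standalone approximation, with the polling loop standing in for waitforlisten (paths are the workspace's):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    # Wait until the app serves RPCs on the default socket (/var/tmp/spdk.sock).
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done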
00:24:56.690 [2024-07-15 15:30:00.488934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.690 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.690 [2024-07-15 15:30:00.562995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.949 [2024-07-15 15:30:00.638854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.949 [2024-07-15 15:30:00.638895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.949 [2024-07-15 15:30:00.638905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.949 [2024-07-15 15:30:00.638914] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.949 [2024-07-15 15:30:00.638922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.949 [2024-07-15 15:30:00.638970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.949 [2024-07-15 15:30:00.639017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.949 [2024-07-15 15:30:00.638987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:56.949 [2024-07-15 15:30:00.639015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:57.515 15:30:01 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:00.795 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:00.795 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:00.795 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:00.795 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:01.054 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:01.054 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:01.054 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:01.054 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:01.054 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:01.054 [2024-07-15 15:30:04.922331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init ***
00:25:01.054 15:30:04 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:01.313 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:01.313 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:01.573 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:01.573 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:01.832 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:01.832 [2024-07-15 15:30:05.661011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:01.832 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:02.092 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']'
00:25:02.092 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:25:02.092 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:25:02.092 15:30:05 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0'
00:25:03.469 Initializing NVMe Controllers
00:25:03.469 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54]
00:25:03.469 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0
00:25:03.469 Initialization complete. Launching workers.
00:25:03.469 ========================================================
00:25:03.469 Latency(us)
00:25:03.469 Device Information : IOPS MiB/s Average min max
00:25:03.469 PCIE (0000:d8:00.0) NSID 1 from core 0: 102234.33 399.35 312.63 14.55 7184.26
00:25:03.469 ========================================================
00:25:03.469 Total : 102234.33 399.35 312.63 14.55 7184.26
00:25:03.469
00:25:03.469 15:30:07 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:03.469 EAL: No free 2048 kB hugepages reported on node 1
00:25:04.847 Initializing NVMe Controllers
00:25:04.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:04.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:04.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:04.847 Initialization complete. Launching workers.
00:25:04.847 ========================================================
00:25:04.847 Latency(us)
00:25:04.847 Device Information : IOPS MiB/s Average min max
00:25:04.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 119.00 0.46 8670.14 258.63 46006.04
00:25:04.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21859.10 5985.04 49879.01
00:25:04.847 ========================================================
00:25:04.847 Total : 165.00 0.64 12347.06 258.63 49879.01
00:25:04.847
00:25:04.847 15:30:08 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:04.847 EAL: No free 2048 kB hugepages reported on node 1
00:25:05.785 Initializing NVMe Controllers
00:25:05.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:05.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:05.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:05.785 Initialization complete. Launching workers.
00:25:05.785 ========================================================
00:25:05.785 Latency(us)
00:25:05.785 Device Information : IOPS MiB/s Average min max
00:25:05.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9792.38 38.25 3278.53 619.84 7978.86
00:25:05.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3905.75 15.26 8237.26 5631.15 15956.64
00:25:05.785 ========================================================
00:25:05.785 Total : 13698.14 53.51 4692.41 619.84 15956.64
00:25:05.785
00:25:06.057 15:30:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:25:06.057 15:30:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:25:06.057 15:30:09 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:06.057 EAL: No free 2048 kB hugepages reported on node 1
00:25:08.607 Initializing NVMe Controllers
00:25:08.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:08.607 Controller IO queue size 128, less than required.
00:25:08.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.607 Controller IO queue size 128, less than required.
00:25:08.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:08.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:08.607 Initialization complete. Launching workers.
00:25:08.607 ========================================================
00:25:08.607 Latency(us)
00:25:08.607 Device Information : IOPS MiB/s Average min max
00:25:08.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 969.50 242.37 136545.64 80556.08 191706.98
00:25:08.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 606.50 151.62 223669.50 69142.69 336006.81
00:25:08.607 ========================================================
00:25:08.607 Total : 1575.99 394.00 170073.95 69142.69 336006.81
00:25:08.607
00:25:08.607 15:30:12 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:08.607 EAL: No free 2048 kB hugepages reported on node 1
00:25:08.865 No valid NVMe controllers or AIO or URING devices found
00:25:08.865 Initializing NVMe Controllers
00:25:08.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:08.865 Controller IO queue size 128, less than required.
00:25:08.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.865 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:08.865 Controller IO queue size 128, less than required.
00:25:08.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:08.865 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:08.865 WARNING: Some requested NVMe devices were skipped
00:25:08.865 15:30:12 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:08.865 EAL: No free 2048 kB hugepages reported on node 1
00:25:11.406 Initializing NVMe Controllers
00:25:11.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:11.406 Controller IO queue size 128, less than required.
00:25:11.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:11.406 Controller IO queue size 128, less than required.
00:25:11.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:11.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:11.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:11.406 Initialization complete. Launching workers.
00:25:11.406 00:25:11.406 ==================== 00:25:11.406 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:11.406 TCP transport: 00:25:11.406 polls: 41111 00:25:11.406 idle_polls: 14256 00:25:11.406 sock_completions: 26855 00:25:11.406 nvme_completions: 4047 00:25:11.406 submitted_requests: 6086 00:25:11.406 queued_requests: 1 00:25:11.406 00:25:11.406 ==================== 00:25:11.406 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:11.406 TCP transport: 00:25:11.406 polls: 56945 00:25:11.406 idle_polls: 29513 00:25:11.406 sock_completions: 27432 00:25:11.406 nvme_completions: 4225 00:25:11.406 submitted_requests: 6400 00:25:11.406 queued_requests: 1 00:25:11.406 ======================================================== 00:25:11.406 Latency(us) 00:25:11.406 Device Information : IOPS MiB/s Average min max 00:25:11.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1011.32 252.83 129763.05 71963.36 213977.13 00:25:11.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1055.81 263.95 123260.72 60797.99 181607.35 00:25:11.406 ======================================================== 00:25:11.406 Total : 2067.14 516.78 126441.91 60797.99 213977.13 00:25:11.406 00:25:11.406 15:30:15 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:11.406 15:30:15 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.664 rmmod nvme_tcp 00:25:11.664 rmmod nvme_fabrics 00:25:11.664 rmmod nvme_keyring 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:11.664 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3147453 ']' 00:25:11.665 15:30:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3147453 00:25:11.665 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3147453 ']' 00:25:11.665 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3147453 00:25:11.665 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:25:11.665 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.665 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3147453 00:25:11.923 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:11.923 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:11.923 15:30:15 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3147453' 00:25:11.923 killing process with pid 3147453 00:25:11.923 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3147453 00:25:11.923 15:30:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3147453 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.826 15:30:17 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.359 15:30:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.359 00:25:16.359 real 0m25.983s 00:25:16.359 user 1m8.250s 00:25:16.359 sys 0m8.374s 00:25:16.359 15:30:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:16.359 15:30:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:16.359 ************************************ 00:25:16.359 END TEST nvmf_perf 00:25:16.359 ************************************ 00:25:16.359 15:30:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:16.359 15:30:19 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:16.359 15:30:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:16.359 15:30:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.359 15:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:16.359 ************************************ 00:25:16.359 START TEST nvmf_fio_host 00:25:16.359 ************************************ 00:25:16.359 15:30:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:16.359 * Looking for test storage... 
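
The nvmf_perf teardown above follows the standard nvmftestfini path: delete the subsystem over RPC, unload the kernel initiator modules (the single modprobe -v -r nvme-tcp is what emits the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), terminate and reap the target, then flush the initiator-side interface. A rough standalone equivalent, with the NQN, pid, and interface taken from this run:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp       # also drags out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3147453 && wait 3147453  # killprocess: SIGTERM the reactor process, then reap it
    ip -4 addr flush cvl_0_1      # interface cleanup, as in nvmf_tcp_fini
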
00:25:16.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.359 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.360 15:30:20 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.919 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:22.920 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
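
The NIC selection above is driven purely by PCI vendor/device IDs: 0x8086 with 0x1592 or 0x159b lands in the e810 list, so the freshly found 0000:af:00.0 is treated as an E810 port and checked against the ice driver. The same classification can be confirmed by hand through sysfs (standard kernel paths, not part of the SPDK scripts):

    pci=0000:af:00.0
    cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device
    # expected: 0x8086 and 0x159b, matching the e810 list above
    basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)"   # expected: ice
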
00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:22.920 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:22.920 Found net devices under 0000:af:00.0: cvl_0_0 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:22.920 Found net devices under 0000:af:00.1: cvl_0_1 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
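
Both E810 ports are now visible as cvl_0_0 and cvl_0_1 and is_hw=yes, so the nvmf_tcp_init sequence that follows splits them across a network namespace: cvl_0_0 (target side, 10.0.0.2) moves into cvl_0_0_ns_spdk while cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace, so target and initiator traffic crosses the link between the two ports rather than a single local interface. Condensed from the commands below:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The closing ping pair then proves each side can reach the other before fio.sh starts its own target.
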
00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:22.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:25:22.920 00:25:22.920 --- 10.0.0.2 ping statistics --- 00:25:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.920 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:25:22.920 00:25:22.920 --- 10.0.0.1 ping statistics --- 00:25:22.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.920 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3154383 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3154383 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3154383 ']' 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.920 15:30:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.179 [2024-07-15 15:30:26.866377] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:25:23.179 [2024-07-15 15:30:26.866426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.179 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.179 [2024-07-15 15:30:26.940763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.179 [2024-07-15 15:30:27.014350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:23.179 [2024-07-15 15:30:27.014389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.179 [2024-07-15 15:30:27.014398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.179 [2024-07-15 15:30:27.014407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.179 [2024-07-15 15:30:27.014414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.179 [2024-07-15 15:30:27.014503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.179 [2024-07-15 15:30:27.014621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.179 [2024-07-15 15:30:27.014706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.179 [2024-07-15 15:30:27.014708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.116 15:30:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.116 15:30:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:24.116 15:30:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:24.116 [2024-07-15 15:30:27.829218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.116 15:30:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:24.116 15:30:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.116 15:30:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.116 15:30:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:24.375 Malloc1 00:25:24.375 15:30:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:24.375 15:30:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:24.634 15:30:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.892 [2024-07-15 15:30:28.598457] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.892 15:30:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:25.150 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:25.151 15:30:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:25.409 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:25.409 fio-3.35 00:25:25.409 Starting 1 thread 00:25:25.409 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.942 00:25:27.942 test: (groupid=0, jobs=1): err= 0: pid=3155064: Mon Jul 15 15:30:31 2024 00:25:27.942 read: IOPS=12.2k, BW=47.6MiB/s (49.9MB/s)(95.5MiB/2005msec) 00:25:27.942 slat (nsec): min=1513, max=245504, avg=1634.15, stdev=2219.53 00:25:27.942 clat (usec): min=3115, max=10578, avg=5850.97, stdev=578.99 00:25:27.942 lat (usec): min=3152, max=10582, avg=5852.61, stdev=579.06 00:25:27.942 clat percentiles (usec): 00:25:27.942 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5276], 20.00th=[ 5473], 00:25:27.942 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:25:27.942 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6652], 00:25:27.942 | 99.00th=[ 8094], 99.50th=[ 8717], 99.90th=[ 9896], 99.95th=[10028], 00:25:27.942 | 99.99th=[10421] 00:25:27.942 bw ( KiB/s): 
min=47056, max=49464, per=99.98%, avg=48744.00, stdev=1142.09, samples=4
00:25:27.942 iops : min=11764, max=12366, avg=12186.00, stdev=285.52, samples=4
00:25:27.942 write: IOPS=12.1k, BW=47.4MiB/s (49.8MB/s)(95.1MiB/2005msec); 0 zone resets
00:25:27.942 slat (nsec): min=1550, max=239329, avg=1708.53, stdev=1682.22
00:25:27.942 clat (usec): min=2504, max=9495, avg=4603.36, stdev=402.88
00:25:27.942 lat (usec): min=2519, max=9496, avg=4605.07, stdev=402.82
00:25:27.942 clat percentiles (usec):
00:25:27.942 | 1.00th=[ 3490], 5.00th=[ 3949], 10.00th=[ 4146], 20.00th=[ 4293],
00:25:27.942 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686],
00:25:27.942 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5080], 95.00th=[ 5211],
00:25:27.942 | 99.00th=[ 5473], 99.50th=[ 5669], 99.90th=[ 7242], 99.95th=[ 8160],
00:25:27.942 | 99.99th=[ 9372]
00:25:27.942 bw ( KiB/s): min=47624, max=49456, per=100.00%, avg=48584.00, stdev=763.91, samples=4
00:25:27.942 iops : min=11906, max=12364, avg=12146.00, stdev=190.98, samples=4
00:25:27.942 lat (msec) : 4=2.91%, 10=97.05%, 20=0.04%
00:25:27.942 cpu : usr=64.62%, sys=29.59%, ctx=42, majf=0, minf=6
00:25:27.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:25:27.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:25:27.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:25:27.942 issued rwts: total=24438,24353,0,0 short=0,0,0,0 dropped=0,0,0,0
00:25:27.942 latency : target=0, window=0, percentile=100.00%, depth=128
00:25:27.942
00:25:27.942 Run status group 0 (all jobs):
00:25:27.942 READ: bw=47.6MiB/s (49.9MB/s), 47.6MiB/s-47.6MiB/s (49.9MB/s-49.9MB/s), io=95.5MiB (100MB), run=2005-2005msec
00:25:27.942 WRITE: bw=47.4MiB/s (49.8MB/s), 47.4MiB/s-47.4MiB/s (49.8MB/s-49.8MB/s), io=95.1MiB (99.7MB), run=2005-2005msec
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host --
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:27.942 15:30:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:28.200 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:28.200 fio-3.35 00:25:28.200 Starting 1 thread 00:25:28.200 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.731 00:25:30.731 test: (groupid=0, jobs=1): err= 0: pid=3155599: Mon Jul 15 15:30:34 2024 00:25:30.731 read: IOPS=10.7k, BW=166MiB/s (175MB/s)(334MiB/2007msec) 00:25:30.731 slat (nsec): min=2438, max=80952, avg=2669.57, stdev=1215.25 00:25:30.731 clat (usec): min=1752, max=23015, avg=7203.71, stdev=2003.36 00:25:30.731 lat (usec): min=1754, max=23018, avg=7206.38, stdev=2003.61 00:25:30.731 clat percentiles (usec): 00:25:30.731 | 1.00th=[ 3523], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5604], 00:25:30.731 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 6980], 60.00th=[ 7504], 00:25:30.731 | 70.00th=[ 7963], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[10945], 00:25:30.731 | 99.00th=[13304], 99.50th=[14222], 99.90th=[17171], 99.95th=[17695], 00:25:30.731 | 99.99th=[17957] 00:25:30.731 bw ( KiB/s): min=82048, max=94080, per=50.49%, avg=86088.00, stdev=5550.71, samples=4 00:25:30.731 iops : min= 5128, max= 5880, avg=5380.50, stdev=346.92, samples=4 00:25:30.731 write: IOPS=6293, BW=98.3MiB/s (103MB/s)(176MiB/1790msec); 0 zone resets 00:25:30.731 slat (usec): min=28, max=382, avg=30.12, stdev= 7.53 00:25:30.731 clat (usec): min=4711, max=16581, avg=8347.97, stdev=1747.67 00:25:30.731 lat (usec): min=4741, max=16613, avg=8378.09, stdev=1749.91 00:25:30.731 clat percentiles (usec): 00:25:30.731 | 1.00th=[ 5473], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 6980], 00:25:30.731 | 30.00th=[ 7308], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8455], 00:25:30.731 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[10683], 95.00th=[11731], 00:25:30.732 | 99.00th=[14615], 99.50th=[15270], 99.90th=[16188], 99.95th=[16319], 00:25:30.732 | 99.99th=[16581] 00:25:30.732 bw ( KiB/s): min=84224, max=98304, per=88.89%, avg=89504.00, stdev=6161.20, samples=4 00:25:30.732 iops : min= 5264, max= 6144, avg=5594.00, stdev=385.07, samples=4 00:25:30.732 lat (msec) : 2=0.02%, 4=1.85%, 10=87.78%, 20=10.34%, 50=0.01% 00:25:30.732 cpu : usr=83.00%, sys=15.00%, ctx=38, 
majf=0, minf=3 00:25:30.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:30.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:30.732 issued rwts: total=21386,11265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:30.732 00:25:30.732 Run status group 0 (all jobs): 00:25:30.732 READ: bw=166MiB/s (175MB/s), 166MiB/s-166MiB/s (175MB/s-175MB/s), io=334MiB (350MB), run=2007-2007msec 00:25:30.732 WRITE: bw=98.3MiB/s (103MB/s), 98.3MiB/s-98.3MiB/s (103MB/s-103MB/s), io=176MiB (185MB), run=1790-1790msec 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:30.732 rmmod nvme_tcp 00:25:30.732 rmmod nvme_fabrics 00:25:30.732 rmmod nvme_keyring 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3154383 ']' 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3154383 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3154383 ']' 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3154383 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.732 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3154383 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3154383' 00:25:30.990 killing process with pid 3154383 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3154383 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3154383 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.990 15:30:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.525 15:30:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:33.525 00:25:33.525 real 0m17.058s 00:25:33.525 user 0m54.224s 00:25:33.525 sys 0m7.801s 00:25:33.525 15:30:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:33.525 15:30:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.525 ************************************ 00:25:33.525 END TEST nvmf_fio_host 00:25:33.525 ************************************ 00:25:33.525 15:30:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:33.525 15:30:36 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:33.525 15:30:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:33.525 15:30:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.525 15:30:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.525 ************************************ 00:25:33.525 START TEST nvmf_failover 00:25:33.525 ************************************ 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:33.525 * Looking for test storage... 
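
Both fio jobs in the nvmf_fio_host run above went through SPDK's external fio plugin rather than the kernel NVMe initiator: the plugin is LD_PRELOADed into fio and the NVMe-oF connection parameters travel in the --filename string, which is why these jobs need no nvme connect step. The invocation recorded above, stripped to its essentials (both job files ship in the SPDK tree):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second job swapped in mock_sgl_config.fio to drive 16 KiB SGL-backed IOs against the same subsystem.
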
00:25:33.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:33.525 15:30:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.142 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:40.142 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:40.143 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:40.143 Found net devices under 0000:af:00.0: cvl_0_0 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:40.143 Found net devices under 0000:af:00.1: cvl_0_1 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:40.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:40.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:25:40.143 00:25:40.143 --- 10.0.0.2 ping statistics --- 00:25:40.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.143 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:25:40.143 00:25:40.143 --- 10.0.0.1 ping statistics --- 00:25:40.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.143 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3159655 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3159655 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3159655 ']' 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.143 15:30:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.143 [2024-07-15 15:30:43.844681] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:25:40.143 [2024-07-15 15:30:43.844726] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.143 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.143 [2024-07-15 15:30:43.916607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:40.143 [2024-07-15 15:30:43.985101] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.143 [2024-07-15 15:30:43.985145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.143 [2024-07-15 15:30:43.985154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.143 [2024-07-15 15:30:43.985163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.143 [2024-07-15 15:30:43.985185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.143 [2024-07-15 15:30:43.985295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.143 [2024-07-15 15:30:43.985370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.143 [2024-07-15 15:30:43.985372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:41.080 [2024-07-15 15:30:44.854043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.080 15:30:44 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:41.340 Malloc0 00:25:41.340 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:41.599 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:41.599 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.857 [2024-07-15 15:30:45.598816] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.857 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:42.116 [2024-07-15 
15:30:45.767249] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:42.116 [2024-07-15 15:30:45.943803] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3159953 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3159953 /var/tmp/bdevperf.sock 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3159953 ']' 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:42.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.116 15:30:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:43.050 15:30:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.050 15:30:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:43.050 15:30:46 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:43.308 NVMe0n1 00:25:43.308 15:30:47 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:43.874 00:25:43.875 15:30:47 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:43.875 15:30:47 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3160220 00:25:43.875 15:30:47 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:44.812 15:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.812 [2024-07-15 15:30:48.710780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99310 is same with the state(5) to be set 00:25:44.812 [2024-07-15 15:30:48.710875] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1c99310 is same with the state(5) to be set 00:25:44.812 [2024-07-15 15:30:48.710886 - 15:30:48.710988] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c99310 is same with the state(5) to be set (message repeated 13 more times) 00:25:45.071 15:30:48 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:48.359 15:30:51 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:48.359 00:25:48.359 15:30:52 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:48.618 15:30:52 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:51.903 15:30:55 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.903 [2024-07-15 15:30:55.491356] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.903 15:30:55 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:52.839 15:30:56 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:52.839 [2024-07-15 15:30:56.686606]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9aed0 is same with the state(5) to be set 00:25:52.839 [2024-07-15 15:30:56.686649 - 15:30:56.686818] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9aed0 is same with the state(5) to be set (message repeated 20 more times) 00:25:52.839 15:30:56 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3160220 00:25:59.412 0 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover --
host/failover.sh@61 -- # killprocess 3159953 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3159953 ']' 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3159953 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3159953 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3159953' 00:25:59.412 killing process with pid 3159953 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3159953 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3159953 00:25:59.412 15:31:02 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:59.412 [2024-07-15 15:30:46.001595] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:25:59.412 [2024-07-15 15:30:46.001645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159953 ] 00:25:59.412 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.412 [2024-07-15 15:30:46.071181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.412 [2024-07-15 15:30:46.143299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.412 Running I/O for 15 seconds... 
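Condensed from failover.sh@35 through @59 as traced above: two paths are attached to the bdevperf NVMe0 controller, verify I/O runs for 15 seconds, and listeners are pulled and re-added so the initiator must fail over between paths. The ABORTED - SQ DELETION completions that follow are the expected side effect of this: in-flight I/O on a removed path is aborted when its qpair is torn down. A sketch of the toggle sequence (reconstructed from the trace; the comments are interpretation, not script text):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# Attach the first two paths to bdevperf's NVMe0 controller (failover.sh@35-36).
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop path 1, fail over to 4421
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # drop path 2, fail over to 4422
sleep 3
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # restore path 1
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # drop path 3, fail back to 4420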
00:25:59.412 [2024-07-15 15:30:48.711441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.412 [2024-07-15 15:30:48.711628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.412 [2024-07-15 15:30:48.711647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.412 [2024-07-15 15:30:48.711667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.412 [2024-07-15 15:30:48.711678] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.711989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.711998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.413 [2024-07-15 15:30:48.712273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.413 [2024-07-15 15:30:48.712283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.413 [2024-07-15 15:30:48.712292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.413 [2024-07-15 15:30:48.712303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.413 [2024-07-15 15:30:48.712312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... dozens of further READ (lba 101208-101696) and WRITE (lba 101768-101944) command/completion pairs on qid:1, each completed with ABORTED - SQ DELETION (00/08), elided ...]
00:25:59.415 [2024-07-15 15:30:48.714019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:59.415 [2024-07-15 15:30:48.714027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:59.415 [2024-07-15 15:30:48.714037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101952 len:8 PRP1 0x0 PRP2 0x0
00:25:59.415 [2024-07-15 15:30:48.714046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.415 [2024-07-15 15:30:48.714090] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1562940 was disconnected and freed. reset controller.
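
For readers decoding the completions above: the "(00/08)" printed by spdk_nvme_print_completion is the (status code type / status code) pair, i.e. SCT 0x00 (generic command status) with SC 0x08 (Command Aborted due to SQ Deletion), the status the driver assigns to every request still outstanding on a queue pair being torn down. A minimal sketch of testing for it with SPDK's public spec definitions (the helper name is ours, not SPDK's):

#include <stdbool.h>
#include "spdk/nvme_spec.h"

/* Hypothetical helper: true when a completion carries the
 * "ABORTED - SQ DELETION (00/08)" status seen throughout this log,
 * i.e. SCT 0x00 (generic) and SC 0x08 (aborted due to SQ deletion). */
static bool
aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}
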
00:25:59.415 [2024-07-15 15:30:48.714103] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:59.415 [2024-07-15 15:30:48.714124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.415 [2024-07-15 15:30:48.714134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:48.714144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.416 [2024-07-15 15:30:48.714153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:48.714163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.416 [2024-07-15 15:30:48.714172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:48.714181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.416 [2024-07-15 15:30:48.714190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:48.714199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:59.416 [2024-07-15 15:30:48.716898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:59.416 [2024-07-15 15:30:48.716928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c590 (9): Bad file descriptor
00:25:59.416 [2024-07-15 15:30:48.784281] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
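
The sequence just logged, outstanding I/O completed as ABORTED - SQ DELETION, the dead qpair freed, the controller reset, and the bdev layer failing over from 10.0.0.2:4420 to 10.0.0.2:4421, is the recovery path the bdev_nvme module drives internally. As a rough sketch of the same idea against SPDK's public NVMe API (not the test's or bdev_nvme's actual code), an application polling its own I/O qpair might recover like this:

#include "spdk/nvme.h"

/*
 * Hedged sketch, assuming a connected ctrlr: a negative return from
 * spdk_nvme_qpair_process_completions() means the qpair has failed
 * (e.g. the TCP connection dropped, as above). One simple recovery
 * pattern is to drop the qpair, reset the controller, and allocate
 * a fresh qpair with default options.
 */
static struct spdk_nvme_qpair *
poll_or_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc >= 0) {
		/* Healthy path: rc completions were reaped. */
		return qpair;
	}

	/* Failed path: the driver completes queued requests with
	 * ABORTED - SQ DELETION, as printed throughout this log. */
	spdk_nvme_ctrlr_free_io_qpair(qpair);
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return NULL; /* controller remained in failed state */
	}
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}
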
00:25:59.416 [2024-07-15 15:30:52.294848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.416 [2024-07-15 15:30:52.294893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:52.294905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.416 [2024-07-15 15:30:52.294923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:52.294933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.416 [2024-07-15 15:30:52.294942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:52.294952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:59.416 [2024-07-15 15:30:52.294961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.416 [2024-07-15 15:30:52.294970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x153c590 is same with the state(5) to be set
00:25:59.416 [2024-07-15 15:30:52.295674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.416 [2024-07-15 15:30:52.295686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly a hundred further WRITE (lba 84280-85096) command/completion pairs on qid:1, each completed with ABORTED - SQ DELETION (00/08), elided ...]
00:25:59.419 [2024-07-15 15:30:52.297728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.419 [2024-07-15 15:30:52.297737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.419 [2024-07-15 15:30:52.297748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.419 [2024-07-15 15:30:52.297757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.419 [2024-07-15 15:30:52.297767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.419 [2024-07-15 15:30:52.297776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.419 [2024-07-15
15:30:52.297787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.297795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.297815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.297838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.297858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.297897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.297917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.297937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.297956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.297976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.297987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.297996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:52.298154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.419 [2024-07-15 15:30:52.298173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17073c0 is same with the state(5) to be set 00:25:59.419 [2024-07-15 15:30:52.298193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.419 [2024-07-15 15:30:52.298201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.419 [2024-07-15 15:30:52.298209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84264 len:8 PRP1 0x0 PRP2 0x0 00:25:59.419 [2024-07-15 15:30:52.298218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:52.298261] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17073c0 was disconnected and freed. reset controller. 00:25:59.419 [2024-07-15 15:30:52.298273] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:59.419 [2024-07-15 15:30:52.298282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.419 [2024-07-15 15:30:52.300930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.419 [2024-07-15 15:30:52.300963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c590 (9): Bad file descriptor 00:25:59.419 [2024-07-15 15:30:52.372463] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:59.419 [2024-07-15 15:30:56.688378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:56.688414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:56.688433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:56.688444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:56.688455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:56.688464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:56.688475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:56.688484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:56.688495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:56.688508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:56.688519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.419 [2024-07-15 15:30:56.688528] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.419 [2024-07-15 15:30:56.688539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.420 [2024-07-15 15:30:56.688548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.420 [2024-07-15 15:30:56.688568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.420 [2024-07-15 15:30:56.688588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.420 [2024-07-15 15:30:56.688607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.420 [2024-07-15 15:30:56.688627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.420 [2024-07-15 15:30:56.688647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.420 [2024-07-15 15:30:56.688666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:59.420 [2024-07-15 15:30:56.688940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.688980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.688989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689138] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11352 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 
[2024-07-15 15:30:56.689733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.420 [2024-07-15 15:30:56.689753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.420 [2024-07-15 15:30:56.689764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.689987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.689999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:59.421 [2024-07-15 15:30:56.690330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.421 [2024-07-15 15:30:56.690718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.421 [2024-07-15 15:30:56.690727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.422 [2024-07-15 15:30:56.690748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.422 [2024-07-15 15:30:56.690768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.422 [2024-07-15 15:30:56.690787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.422 [2024-07-15 15:30:56.690806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.422 [2024-07-15 15:30:56.690826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.422 [2024-07-15 15:30:56.690848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.422 [2024-07-15 15:30:56.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.422 [2024-07-15 15:30:56.690901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:8 PRP1 0x0 PRP2 0x0 00:25:59.422 [2024-07-15 15:30:56.690910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.422 [2024-07-15 15:30:56.690929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.422 [2024-07-15 15:30:56.690937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11912 len:8 PRP1 0x0 PRP2 0x0 00:25:59.422 [2024-07-15 15:30:56.690946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.690955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.422 [2024-07-15 15:30:56.690962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.422 [2024-07-15 15:30:56.690970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11920 len:8 PRP1 0x0 PRP2 0x0 00:25:59.422 [2024-07-15 15:30:56.690979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.691023] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17071b0 was disconnected and freed. reset controller. 00:25:59.422 [2024-07-15 15:30:56.691036] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:59.422 [2024-07-15 15:30:56.691060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.422 [2024-07-15 15:30:56.691070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.691080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.422 [2024-07-15 15:30:56.691089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.691098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.422 [2024-07-15 15:30:56.691107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.691117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.422 [2024-07-15 15:30:56.691126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.422 [2024-07-15 15:30:56.691135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.422 [2024-07-15 15:30:56.691167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153c590 (9): Bad file descriptor 00:25:59.422 [2024-07-15 15:30:56.693813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.422 [2024-07-15 15:30:56.727193] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:59.422
00:25:59.422 Latency(us)
00:25:59.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:59.422 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:59.422 Verification LBA range: start 0x0 length 0x4000
00:25:59.422 NVMe0n1 : 15.00 12109.41 47.30 563.97 0.00 10079.18 802.82 13369.34
00:25:59.422 ===================================================================================================================
00:25:59.422 Total : 12109.41 47.30 563.97 0.00 10079.18 802.82 13369.34
00:25:59.422 Received shutdown signal, test time was about 15.000000 seconds
00:25:59.422
00:25:59.422 Latency(us)
00:25:59.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:59.422 ===================================================================================================================
00:25:59.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3162854
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3162854 /var/tmp/bdevperf.sock
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3162854 ']'
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:59.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:59.422 15:31:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:59.997 15:31:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.997 15:31:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:59.997 15:31:03 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:00.256 [2024-07-15 15:31:03.928955] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:00.256 15:31:03 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:00.256 [2024-07-15 15:31:04.117443] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:00.256 15:31:04 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:00.824 NVMe0n1 00:26:00.824 15:31:04 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.083 00:26:01.083 15:31:04 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.341 00:26:01.341 15:31:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:01.341 15:31:05 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:01.599 15:31:05 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:01.599 15:31:05 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:04.914 15:31:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:04.914 15:31:08 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:04.914 15:31:08 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:04.914 15:31:08 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3163669 00:26:04.914 15:31:08 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3163669 00:26:06.289 0 00:26:06.289 15:31:09 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.289 [2024-07-15 15:31:02.982176] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:26:06.289 [2024-07-15 15:31:02.982228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162854 ] 00:26:06.289 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.289 [2024-07-15 15:31:03.051509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.289 [2024-07-15 15:31:03.117099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.289 [2024-07-15 15:31:05.457322] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:06.289 [2024-07-15 15:31:05.457366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.289 [2024-07-15 15:31:05.457379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.289 [2024-07-15 15:31:05.457390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.289 [2024-07-15 15:31:05.457399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.289 [2024-07-15 15:31:05.457409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.289 [2024-07-15 15:31:05.457418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.289 [2024-07-15 15:31:05.457427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.289 [2024-07-15 15:31:05.457436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.289 [2024-07-15 15:31:05.457445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.289 [2024-07-15 15:31:05.457471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.289 [2024-07-15 15:31:05.457486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9b590 (9): Bad file descriptor 00:26:06.289 [2024-07-15 15:31:05.518627] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:06.289 Running I/O for 1 seconds... 
00:26:06.289
00:26:06.289 Latency(us)
00:26:06.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:06.289 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:06.289 Verification LBA range: start 0x0 length 0x4000
00:26:06.289 NVMe0n1 : 1.01 12039.76 47.03 0.00 0.00 10590.98 2202.01 17825.79
00:26:06.289 ===================================================================================================================
00:26:06.289 Total : 12039.76 47.03 0.00 0.00 10590.98 2202.01 17825.79
00:26:06.289 15:31:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:06.289 15:31:09 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:06.289 15:31:09 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:06.289 15:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:06.289 15:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:06.547 15:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:06.805 15:31:10 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3162854
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3162854 ']'
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3162854
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3162854
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3162854'
00:26:10.088 killing process with pid 3162854
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3162854
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3162854
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:10.088 15:31:13 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:10.347
15:31:14 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.347 rmmod nvme_tcp 00:26:10.347 rmmod nvme_fabrics 00:26:10.347 rmmod nvme_keyring 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3159655 ']' 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3159655 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3159655 ']' 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3159655 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3159655 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3159655' 00:26:10.347 killing process with pid 3159655 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3159655 00:26:10.347 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3159655 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.606 15:31:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.138 15:31:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:13.138 00:26:13.138 real 0m39.482s 00:26:13.138 user 2m2.318s 00:26:13.138 sys 0m9.867s 00:26:13.138 15:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:13.138 15:31:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:26:13.138 ************************************ 00:26:13.138 END TEST nvmf_failover 00:26:13.138 ************************************ 00:26:13.138 15:31:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:13.138 15:31:16 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.138 15:31:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:13.138 15:31:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:13.138 15:31:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:13.138 ************************************ 00:26:13.138 START TEST nvmf_host_discovery 00:26:13.138 ************************************ 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.138 * Looking for test storage... 00:26:13.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.138 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:13.139 15:31:16 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:13.139 15:31:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.702 15:31:22 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.702 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:19.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:19.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.703 15:31:22 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:19.703 Found net devices under 0000:af:00.0: cvl_0_0 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:19.703 Found net devices under 0000:af:00.1: cvl_0_1 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.703 15:31:22 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:19.703 15:31:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:19.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:19.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms
00:26:19.703
00:26:19.703 --- 10.0.0.2 ping statistics ---
00:26:19.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:19.703 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:19.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:19.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms
00:26:19.703
00:26:19.703 --- 10.0.0.1 ping statistics ---
00:26:19.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:19.703 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3168270
00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3168270 00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3168270 ']' 00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.703 15:31:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:19.703 [2024-07-15 15:31:23.299213] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:26:19.703 [2024-07-15 15:31:23.299263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.703 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.703 [2024-07-15 15:31:23.370762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.703 [2024-07-15 15:31:23.443570] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.703 [2024-07-15 15:31:23.443612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.703 [2024-07-15 15:31:23.443622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.703 [2024-07-15 15:31:23.443630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.703 [2024-07-15 15:31:23.443653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:19.703 [2024-07-15 15:31:23.443675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.271 [2024-07-15 15:31:24.138971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.271 [2024-07-15 15:31:24.147135] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.271 null0 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.271 null1 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3168411 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3168411 /tmp/host.sock 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3168411 ']' 00:26:20.271 15:31:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:20.271 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:20.271 15:31:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:20.530 [2024-07-15 15:31:24.225916] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:26:20.530 [2024-07-15 15:31:24.225966] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168411 ] 00:26:20.530 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.530 [2024-07-15 15:31:24.311748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.530 [2024-07-15 15:31:24.385693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.466 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:21.467 15:31:25 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.467 [2024-07-15 15:31:25.366286] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.467 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:21.726 15:31:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:22.293 [2024-07-15 15:31:26.046941] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:22.293 [2024-07-15 15:31:26.046962] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:22.293 [2024-07-15 15:31:26.046977] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:22.293 [2024-07-15 15:31:26.134232] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:22.293 [2024-07-15 15:31:26.198337] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:26:22.293 [2024-07-15 15:31:26.198359] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.861 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:22.862 15:31:26 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.862 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 15:31:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 [2024-07-15 15:31:27.010936] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:23.121 [2024-07-15 15:31:27.012040] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:23.121 [2024-07-15 15:31:27.012063] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.121 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:23.380 [2024-07-15 15:31:27.140728] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:23.380 15:31:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:23.638 [2024-07-15 15:31:27.449068] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:23.638 [2024-07-15 15:31:27.449086] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:23.638 [2024-07-15 15:31:27.449093] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.573 [2024-07-15 15:31:28.282642] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:24.573 [2024-07-15 15:31:28.282663] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.573 [2024-07-15 15:31:28.288107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.573 [2024-07-15 15:31:28.288126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.573 [2024-07-15 15:31:28.288138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.573 [2024-07-15 15:31:28.288148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.573 [2024-07-15 15:31:28.288158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.573 [2024-07-15 15:31:28.288169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.573 [2024-07-15 15:31:28.288179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.573 [2024-07-15 15:31:28.288188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.573 [2024-07-15 15:31:28.288201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.573 15:31:28 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.573 [2024-07-15 15:31:28.298121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.573 [2024-07-15 15:31:28.308159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:24.573 [2024-07-15 15:31:28.308466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.573 [2024-07-15 15:31:28.308482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfafb0 with addr=10.0.0.2, port=4420 00:26:24.573 [2024-07-15 15:31:28.308493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.573 [2024-07-15 15:31:28.308506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.573 [2024-07-15 15:31:28.308519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:24.573 [2024-07-15 15:31:28.308528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:24.573 [2024-07-15 15:31:28.308538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:24.573 [2024-07-15 15:31:28.308550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.573 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.573 [2024-07-15 15:31:28.318214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:24.573 [2024-07-15 15:31:28.318545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.573 [2024-07-15 15:31:28.318559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfafb0 with addr=10.0.0.2, port=4420 00:26:24.573 [2024-07-15 15:31:28.318569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.573 [2024-07-15 15:31:28.318582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.573 [2024-07-15 15:31:28.318600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:24.573 [2024-07-15 15:31:28.318610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:24.574 [2024-07-15 15:31:28.318619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:24.574 [2024-07-15 15:31:28.318630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.574 [2024-07-15 15:31:28.328266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:24.574 [2024-07-15 15:31:28.328606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.574 [2024-07-15 15:31:28.328621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfafb0 with addr=10.0.0.2, port=4420 00:26:24.574 [2024-07-15 15:31:28.328634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.574 [2024-07-15 15:31:28.328647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.574 [2024-07-15 15:31:28.328659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:24.574 [2024-07-15 15:31:28.328667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:24.574 [2024-07-15 15:31:28.328676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:24.574 [2024-07-15 15:31:28.328688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:24.574 [2024-07-15 15:31:28.338323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:24.574 [2024-07-15 15:31:28.338597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.574 [2024-07-15 15:31:28.338611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfafb0 with addr=10.0.0.2, port=4420 00:26:24.574 [2024-07-15 15:31:28.338621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.574 [2024-07-15 15:31:28.338634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.574 [2024-07-15 15:31:28.338653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:24.574 [2024-07-15 15:31:28.338662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:24.574 [2024-07-15 15:31:28.338671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:24.574 [2024-07-15 15:31:28.338682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.574 [2024-07-15 15:31:28.348376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:24.574 [2024-07-15 15:31:28.348749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.574 [2024-07-15 15:31:28.348764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfafb0 with addr=10.0.0.2, port=4420 00:26:24.574 [2024-07-15 15:31:28.348774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.574 [2024-07-15 15:31:28.348786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.574 [2024-07-15 15:31:28.348801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:24.574 [2024-07-15 15:31:28.348809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:24.574 [2024-07-15 15:31:28.348818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:24.574 [2024-07-15 15:31:28.348829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.574 [2024-07-15 15:31:28.358431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:24.574 [2024-07-15 15:31:28.358767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.574 [2024-07-15 15:31:28.358781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfafb0 with addr=10.0.0.2, port=4420 00:26:24.574 [2024-07-15 15:31:28.358790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.574 [2024-07-15 15:31:28.358802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.574 [2024-07-15 15:31:28.358821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:24.574 [2024-07-15 15:31:28.358830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:24.574 [2024-07-15 15:31:28.358844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:24.574 [2024-07-15 15:31:28.358855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.574 [2024-07-15 15:31:28.368482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:24.574 [2024-07-15 15:31:28.368842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.574 [2024-07-15 15:31:28.368856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bfafb0 with addr=10.0.0.2, port=4420 00:26:24.574 [2024-07-15 15:31:28.368865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfafb0 is same with the state(5) to be set 00:26:24.574 [2024-07-15 15:31:28.368878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bfafb0 (9): Bad file descriptor 00:26:24.574 [2024-07-15 15:31:28.368890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:24.574 [2024-07-15 15:31:28.368898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:24.574 [2024-07-15 15:31:28.368907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:24.574 [2024-07-15 15:31:28.368918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.574 [2024-07-15 15:31:28.370336] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:24.574 [2024-07-15 15:31:28.370353] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.574 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.833 15:31:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.210 [2024-07-15 15:31:29.707007] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:26.210 [2024-07-15 15:31:29.707025] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:26.210 [2024-07-15 15:31:29.707038] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:26.210 [2024-07-15 15:31:29.794297] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:26.210 [2024-07-15 15:31:29.983170] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:26.210 [2024-07-15 15:31:29.983198] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:26.210 15:31:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.210 15:31:29 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.210 request: 00:26:26.210 { 00:26:26.210 "name": "nvme", 00:26:26.210 "trtype": "tcp", 00:26:26.210 "traddr": "10.0.0.2", 00:26:26.210 "adrfam": "ipv4", 00:26:26.210 "trsvcid": "8009", 00:26:26.210 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:26.210 "wait_for_attach": true, 00:26:26.210 "method": "bdev_nvme_start_discovery", 00:26:26.210 "req_id": 1 00:26:26.210 } 00:26:26.210 Got JSON-RPC error response 00:26:26.210 response: 00:26:26.210 { 00:26:26.210 "code": -17, 00:26:26.210 "message": "File exists" 00:26:26.210 } 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:26.210 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:26.211 15:31:30 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:26.211 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.211 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:26.211 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.211 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:26.211 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.211 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.469 request: 00:26:26.469 { 00:26:26.469 "name": "nvme_second", 00:26:26.469 "trtype": "tcp", 00:26:26.469 "traddr": "10.0.0.2", 00:26:26.469 "adrfam": "ipv4", 00:26:26.469 "trsvcid": "8009", 00:26:26.469 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:26.469 "wait_for_attach": true, 00:26:26.469 "method": "bdev_nvme_start_discovery", 00:26:26.469 "req_id": 1 00:26:26.469 } 00:26:26.469 Got JSON-RPC error response 00:26:26.469 response: 00:26:26.469 { 00:26:26.469 "code": -17, 00:26:26.469 "message": "File exists" 00:26:26.469 } 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.469 15:31:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.405 [2024-07-15 15:31:31.242819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-15 15:31:31.242856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c37a40 with addr=10.0.0.2, port=8010 00:26:27.405 [2024-07-15 15:31:31.242873] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:27.405 [2024-07-15 15:31:31.242882] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:27.405 [2024-07-15 15:31:31.242890] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:28.341 [2024-07-15 15:31:32.245238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.341 [2024-07-15 15:31:32.245263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c37a40 with addr=10.0.0.2, port=8010 00:26:28.341 [2024-07-15 15:31:32.245277] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:28.341 [2024-07-15 15:31:32.245285] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:28.341 [2024-07-15 15:31:32.245293] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:29.720 [2024-07-15 15:31:33.247282] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:29.720 request: 00:26:29.720 { 00:26:29.720 "name": "nvme_second", 00:26:29.720 "trtype": "tcp", 00:26:29.720 "traddr": "10.0.0.2", 00:26:29.720 "adrfam": "ipv4", 00:26:29.720 "trsvcid": "8010", 00:26:29.720 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:29.720 "wait_for_attach": false, 00:26:29.720 "attach_timeout_ms": 3000, 00:26:29.720 "method": "bdev_nvme_start_discovery", 00:26:29.720 "req_id": 1 00:26:29.720 } 00:26:29.720 Got JSON-RPC error response 00:26:29.720 response: 00:26:29.720 { 00:26:29.720 
"code": -110, 00:26:29.720 "message": "Connection timed out" 00:26:29.720 } 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3168411 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.720 rmmod nvme_tcp 00:26:29.720 rmmod nvme_fabrics 00:26:29.720 rmmod nvme_keyring 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3168270 ']' 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3168270 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3168270 ']' 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3168270 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3168270 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3168270' 00:26:29.720 killing process with pid 3168270 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3168270 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3168270 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.720 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.721 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.721 15:31:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.721 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.721 15:31:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:32.327 00:26:32.327 real 0m19.090s 00:26:32.327 user 0m22.554s 00:26:32.327 sys 0m6.987s 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.327 ************************************ 00:26:32.327 END TEST nvmf_host_discovery 00:26:32.327 ************************************ 00:26:32.327 15:31:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:32.327 15:31:35 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:32.327 15:31:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:32.327 15:31:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:32.327 15:31:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:32.327 ************************************ 00:26:32.327 START TEST nvmf_host_multipath_status 00:26:32.327 ************************************ 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:32.327 * Looking for test storage... 
00:26:32.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.327 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:32.328 15:31:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:32.328 15:31:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:38.889 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:38.889 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
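Before the per-device walk traced below: each supported PCI function is mapped to its kernel net interfaces through sysfs. A condensed sketch of that loop, with array names taken from the nvmf/common.sh trace (illustrative, not the canonical implementation):

    # Sketch of the NIC discovery below: for every supported PCI device,
    # read its net interfaces from sysfs, strip the directory prefix, and
    # collect them for the TCP test network setup.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # e.g. cvl_0_0, cvl_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done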
00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.889 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:38.890 Found net devices under 0000:af:00.0: cvl_0_0 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:38.890 Found net devices under 0000:af:00.1: cvl_0_1 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:38.890 15:31:42 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:38.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:26:38.890 00:26:38.890 --- 10.0.0.2 ping statistics --- 00:26:38.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.890 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:38.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:26:38.890 00:26:38.890 --- 10.0.0.1 ping statistics --- 00:26:38.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.890 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3173841 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3173841 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3173841 ']' 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:38.890 15:31:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:38.890 [2024-07-15 15:31:42.633804] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
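The target launch traced here pairs a netns-scoped app start with a wait on its RPC socket. A condensed sketch, assuming the paths and core mask shown in this log (per-run values, not fixed constants):

    # Sketch of the nvmfappstart pattern traced above: start nvmf_tgt inside
    # the target network namespace, record its pid, and block until the app
    # answers on /var/tmp/spdk.sock before any RPC is issued.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls the JSON-RPC socket until it accepts connections
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT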
00:26:38.890 [2024-07-15 15:31:42.633868] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.890 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.890 [2024-07-15 15:31:42.706854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:38.890 [2024-07-15 15:31:42.778345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.890 [2024-07-15 15:31:42.778386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.890 [2024-07-15 15:31:42.778395] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.890 [2024-07-15 15:31:42.778403] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.890 [2024-07-15 15:31:42.778410] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.890 [2024-07-15 15:31:42.778459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.890 [2024-07-15 15:31:42.778462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3173841 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:39.836 [2024-07-15 15:31:43.625356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.836 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:40.094 Malloc0 00:26:40.094 15:31:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:40.352 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.352 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.610 [2024-07-15 15:31:44.344757] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.610 15:31:44 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:40.610 [2024-07-15 15:31:44.509225] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3174134 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3174134 /var/tmp/bdevperf.sock 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3174134 ']' 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:40.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.869 15:31:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:41.804 15:31:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.804 15:31:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:41.804 15:31:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:41.804 15:31:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:42.062 Nvme0n1 00:26:42.062 15:31:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:42.628 Nvme0n1 00:26:42.628 15:31:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:42.628 15:31:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:44.533 15:31:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:44.533 15:31:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:44.791 15:31:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:44.791 15:31:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.161 15:31:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:46.161 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:46.161 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:46.161 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.161 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:46.419 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.419 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:46.419 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.419 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:46.677 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.677 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:46.677 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.677 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:46.935 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.935 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:46.935 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:46.935 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:46.935 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:46.935 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:46.935 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:47.193 15:31:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:47.451 15:31:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:48.386 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:48.386 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:48.386 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.386 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.645 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.903 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.903 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.903 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.903 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:49.162 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.162 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:49.162 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.162 15:31:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:49.420 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.420 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:49.420 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.420 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:49.420 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.420 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:49.420 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:49.678 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:49.937 15:31:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:50.872 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:50.872 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:50.872 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.872 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:51.130 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.130 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:51.130 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.130 15:31:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:51.130 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:51.130 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:51.130 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.130 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:51.389 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.389 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:51.389 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.389 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.648 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.907 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.907 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:51.907 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.166 15:31:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:52.424 15:31:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:53.360 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:53.360 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.360 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.360 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.619 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:53.949 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.949 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:53.949 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.949 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.236 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.236 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:54.236 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.236 15:31:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:54.236 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:54.236 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:54.236 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.236 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:54.495 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.495 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:54.495 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:54.753 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:54.753 15:31:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:55.690 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:55.690 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:55.949 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.949 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.949 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.949 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:55.949 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.949 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:56.207 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.207 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:56.207 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.208 15:31:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.466 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.724 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.725 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:56.725 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.725 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.983 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.983 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:56.983 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:56.983 15:32:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:57.241 15:32:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:58.187 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:58.187 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:58.187 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.187 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.445 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.445 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:58.445 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.445 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:58.704 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.962 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.962 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:58.962 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.962 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:59.221 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.221 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:59.221 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.221 15:32:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:59.221 15:32:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.221 15:32:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:59.482 15:32:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:59.482 15:32:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:59.741 15:32:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:59.999 15:32:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:00.934 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:00.934 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:00.934 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.934 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:01.192 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.192 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:01.192 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.193 15:32:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:01.193 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.193 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:01.193 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.193 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:01.451 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.451 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:01.451 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.451 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:01.709 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.709 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:01.709 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.709 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:01.709 15:32:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.709 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:01.709 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.709 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:01.967 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.967 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:01.967 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:02.224 15:32:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:02.482 15:32:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:03.416 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:03.416 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:03.416 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.416 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:03.674 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:03.675 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:03.675 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.675 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:03.675 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.675 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:03.675 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:03.675 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.933 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.933 15:32:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:03.933 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.934 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:04.192 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.192 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:04.192 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.192 15:32:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.451 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.451 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:04.451 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.451 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:04.451 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.451 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:04.451 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:04.710 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:04.968 15:32:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:05.903 15:32:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:05.903 15:32:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:05.903 15:32:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.903 15:32:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.162 15:32:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.162 15:32:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:06.162 15:32:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.162 15:32:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:06.162 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.162 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:06.162 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.162 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:06.420 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.420 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:06.420 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.420 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:06.679 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.679 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:06.679 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.679 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:06.938 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.938 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:06.938 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.938 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.938 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.938 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:06.938 15:32:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:07.201 15:32:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:07.460 15:32:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:08.396 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:08.396 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:08.396 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.396 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:08.656 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.914 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.914 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.914 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:08.914 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.173 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.173 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:09.173 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.173 15:32:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3174134 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3174134 ']' 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3174134 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.432 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3174134 00:27:09.694 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:09.694 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:09.694 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3174134' 00:27:09.694 killing process with pid 3174134 00:27:09.694 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3174134 00:27:09.694 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3174134 00:27:09.694 Connection closed with partial response: 00:27:09.694 00:27:09.694 00:27:09.694 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3174134 00:27:09.694 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:09.694 [2024-07-15 15:31:44.571242] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:27:09.694 [2024-07-15 15:31:44.571294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3174134 ] 00:27:09.694 EAL: No free 2048 kB hugepages reported on node 1 00:27:09.694 [2024-07-15 15:31:44.637000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.694 [2024-07-15 15:31:44.707516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.694 Running I/O for 90 seconds... 
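The whole sequence traced above is the test's ANA state machine: set_ANA_state flips the two listeners through each optimized/non_optimized/inaccessible combination, sleeps one second so the host has time to pick up the ANA change, then check_status asserts what bdev_nvme_get_io_paths reports for each port. A minimal sketch of the helpers, reconstructed from the xtrace output (the real definitions evidently live in host/multipath_status.sh; the rpc_py and bdevperf_sock variable names here are assumptions):

# Sketch reconstructed from the sh@59-73 trace lines above; not the verbatim script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # target-side RPC (default socket)
bdevperf_sock=/var/tmp/bdevperf.sock                                     # initiator-side (bdevperf) RPC socket

set_ANA_state() {  # sh@59-60: $1 -> ANA state for the 4420 listener, $2 -> for the 4421 listener
	"$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
	"$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

port_status() {  # sh@64: $1 = trsvcid, $2 = current|connected|accessible, $3 = expected value
	[[ $("$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
		jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2") == "$3" ]]
}

check_status() {  # sh@68-73: six expected values; under the suite's set -e, any mismatch aborts the test
	port_status 4420 current "$1"
	port_status 4421 current "$2"
	port_status 4420 connected "$3"
	port_status 4421 connected "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}

Note the effect of bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active at sh@116: up to that point only one path at a time can be "current", so the checks expect true/false pairs, while afterwards I/O is spread over every usable path and check_status true true ... passes with both ports current at once.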
00:27:09.694 [2024-07-15 15:31:58.379944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.694 [2024-07-15 15:31:58.379987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.382971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.382991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.383000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.383020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.383029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.383048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.383060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.383080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.383089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:09.694 [2024-07-15 15:31:58.383109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.694 [2024-07-15 15:31:58.383118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:31:58.383147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:31:58.383178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:31:58.383208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:31:58.383237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:31:58.383267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
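Every completion in this dump carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (Path Related Status) with status code 0x02: the target's answer for I/O issued against a listener whose ANA group is currently inaccessible. The host multipath layer treats it as retryable and resubmits on the surviving path, which is why bdevperf keeps running through every state flip above. From the initiator side, the unusable paths can be listed with the same RPC the trace uses (a sketch; it assumes .accessible is a JSON boolean, as the bare true/false output earlier suggests):

# Hedged one-liner: print the trsvcid of every path bdevperf currently cannot use.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
	jq -r '.poll_groups[].io_paths[] | select(.accessible == false) | .transport.trsvcid'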
00:27:09.695 [2024-07-15 15:31:58.383298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:31:58.383328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:31:58.383357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:31:58.383451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:31:58.383463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.170857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.170901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.170942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.170954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.170969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.170979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.170993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:32:11.171262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.695 [2024-07-15 15:32:11.171495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:27:09.695 [2024-07-15 15:32:11.171781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.695 [2024-07-15 15:32:11.171932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.695 [2024-07-15 15:32:11.171941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.171956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.171966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.171981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.171990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.172232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.696 [2024-07-15 15:32:11.172256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.696 [2024-07-15 15:32:11.172280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.172295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.696 [2024-07-15 15:32:11.172304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.173370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.173400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.173425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.173449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.173474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.173501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.696 [2024-07-15 15:32:11.173525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:09.696 [2024-07-15 15:32:11.173540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:09.696 [2024-07-15 15:32:11.173549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:09.696 [2024-07-15 15:32:11.173734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:09.696 [2024-07-15 15:32:11.173744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:09.696 Received shutdown signal, test time was about 26.915438 seconds
00:27:09.696
00:27:09.696 Latency(us)
00:27:09.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.696 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:09.696 Verification LBA range: start 0x0 length 0x4000
00:27:09.696 Nvme0n1 : 26.91 11219.44 43.83 0.00 0.00 11387.56 484.97 3019898.88
00:27:09.696 ===================================================================================================================
00:27:09.696 Total : 11219.44 43.83 0.00 0.00 11387.56 484.97 3019898.88
00:27:09.696 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:09.955 rmmod nvme_tcp
00:27:09.955 rmmod nvme_fabrics
00:27:09.955 rmmod nvme_keyring
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3173841 ']'
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3173841
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3173841 ']'
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3173841
00:27:09.955 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:27:09.956 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:09.956 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3173841
00:27:09.956 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:27:09.956 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:27:09.956 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3173841'
00:27:09.956 killing process with pid 3173841
00:27:09.956 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3173841
00:27:09.956 15:32:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3173841
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:10.215 15:32:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:12.803 15:32:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:12.803
00:27:12.803 real 0m40.322s
00:27:12.803 user 1m42.818s
00:27:12.803 sys 0m14.542s
00:27:12.803 15:32:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:12.803 15:32:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:12.803 ************************************
00:27:12.803 END TEST nvmf_host_multipath_status
00:27:12.803 ************************************
00:27:12.803 15:32:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:12.803 15:32:16 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:12.803 15:32:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:12.803 15:32:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:12.803 15:32:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:12.803 ************************************
00:27:12.803 START TEST nvmf_discovery_remove_ifc
00:27:12.803 ************************************
00:27:12.803 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:12.803 * Looking for test storage...
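The nvme_qpair flood earlier in this section is one failover event reported once per queued I/O: each WRITE/READ command print is paired with a completion carrying the ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02), and the summary table above then shows the verify job completing anyway (11219.44 IOPS over the 26.91 s runtime, zero Fail/s). When reading a saved copy of this console output offline, a short pipeline collapses the flood into counts; a minimal sketch, assuming the output was captured to a file named build.log (hypothetical name, not produced by this job):

  # Tally completions per status string, e.g. 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)'.
  grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9/]*)' build.log | sort | uniq -c | sort -rn

  # List the distinct LBAs whose commands were reported, to see the affected range.
  grep -o 'lba:[0-9]*' build.log | sort -u | head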
00:27:12.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.803 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.804 15:32:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:19.370 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:19.371 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:19.371 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.371 15:32:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:19.371 Found net devices under 0000:af:00.0: cvl_0_0 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:19.371 Found net devices under 0000:af:00.1: cvl_0_1 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:19.371 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:19.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:19.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms
00:27:19.629
00:27:19.629 --- 10.0.0.2 ping statistics ---
00:27:19.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:19.629 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:19.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:19.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:27:19.629
00:27:19.629 --- 10.0.0.1 ping statistics ---
00:27:19.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:19.629 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3182833
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3182833
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3182833 ']'
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:19.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:19.629 15:32:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:19.629 [2024-07-15 15:32:23.451417] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
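This is nvmf/common.sh building the test topology: both e810 ports are flushed, cvl_0_0 is moved into a private network namespace to act as the target side, 10.0.0.1/24 and 10.0.0.2/24 go on the two ends, reachability is proven with one ping in each direction, and only then is nvmf_tgt launched inside the namespace. Stripped of the xtrace prefixes, the sequence reduces to roughly the following (a condensed sketch of the commands already visible in the trace, not the verbatim helper):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Isolating the target in its own namespace is what lets the test later pull cvl_0_0 out from under the host without disturbing the build node's real networking.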
00:27:19.629 [2024-07-15 15:32:23.451468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.629 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.629 [2024-07-15 15:32:23.524872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.887 [2024-07-15 15:32:23.592799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.887 [2024-07-15 15:32:23.592849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.887 [2024-07-15 15:32:23.592863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:19.887 [2024-07-15 15:32:23.592871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:19.887 [2024-07-15 15:32:23.592878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:19.887 [2024-07-15 15:32:23.592908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.453 [2024-07-15 15:32:24.296022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.453 [2024-07-15 15:32:24.304190] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:20.453 null0 00:27:20.453 [2024-07-15 15:32:24.336176] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3183031 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3183031 /tmp/host.sock 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3183031 ']' 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:27:20.453 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:20.453 15:32:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:20.713 [2024-07-15 15:32:24.407339] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:27:20.713 [2024-07-15 15:32:24.407387] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183031 ] 00:27:20.713 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.713 [2024-07-15 15:32:24.478045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.713 [2024-07-15 15:32:24.552930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.648 15:32:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.585 [2024-07-15 15:32:26.324499] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:22.585 [2024-07-15 15:32:26.324523] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:22.585 [2024-07-15 15:32:26.324536] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:22.585 [2024-07-15 15:32:26.452927] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:22.844 [2024-07-15 15:32:26.680801] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:22.844 [2024-07-15 15:32:26.680855] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:22.844 [2024-07-15 15:32:26.680877] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:22.844 [2024-07-15 15:32:26.680892] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:22.844 [2024-07-15 15:32:26.680914] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:22.844 [2024-07-15 15:32:26.684369] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1958d40 was disconnected and freed. delete nvme_qpair. 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:22.844 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.103 15:32:26 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:23.103 15:32:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:24.040 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:24.040 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.040 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:24.040 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:24.040 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.040 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.040 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:24.299 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.299 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:24.299 15:32:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.235 15:32:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.235 15:32:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.235 15:32:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.171 15:32:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.547 15:32:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.547 15:32:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.481 [2024-07-15 15:32:32.121812] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:28.481 [2024-07-15 15:32:32.121860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.481 [2024-07-15 15:32:32.121874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.481 [2024-07-15 15:32:32.121886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.481 [2024-07-15 15:32:32.121895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.481 [2024-07-15 15:32:32.121904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.481 [2024-07-15 15:32:32.121914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.481 [2024-07-15 15:32:32.121923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.481 [2024-07-15 15:32:32.121932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.481 [2024-07-15 15:32:32.121942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.481 [2024-07-15 15:32:32.121951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.481 [2024-07-15 15:32:32.121960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191f720 is same with the state(5) to be set 00:27:28.481 [2024-07-15 15:32:32.131834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191f720 (9): Bad file descriptor 00:27:28.481 15:32:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.481 15:32:32 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.481 15:32:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.481 15:32:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.481 15:32:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.481 15:32:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.481 15:32:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.481 [2024-07-15 15:32:32.141871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.424 [2024-07-15 15:32:33.184852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:29.424 [2024-07-15 15:32:33.184897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191f720 with addr=10.0.0.2, port=4420 00:27:29.424 [2024-07-15 15:32:33.184914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191f720 is same with the state(5) to be set 00:27:29.424 [2024-07-15 15:32:33.184948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191f720 (9): Bad file descriptor 00:27:29.424 [2024-07-15 15:32:33.185333] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:29.424 [2024-07-15 15:32:33.185357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:29.424 [2024-07-15 15:32:33.185370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:29.424 [2024-07-15 15:32:33.185384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:29.424 [2024-07-15 15:32:33.185404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:29.424 [2024-07-15 15:32:33.185417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.424 15:32:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.424 15:32:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.424 15:32:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.362 [2024-07-15 15:32:34.187883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:30.362 [2024-07-15 15:32:34.187906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:30.362 [2024-07-15 15:32:34.187916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:30.362 [2024-07-15 15:32:34.187925] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:30.362 [2024-07-15 15:32:34.187938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
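The posix.c connect() failures with errno 110 (Connection timed out) and the reset/reconnect churn above are paced by the options the test passed to bdev_nvme_start_discovery when the host app attached, visible earlier in this section: --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1, and --ctrlr-loss-timeout-sec 2. Re-issued by hand against the host app's private RPC socket the call looks like this; the flags and addresses are copied from the trace, while the gloss in the comments is the usual SPDK reading of them, stated here as an aid rather than taken from this log:

  # --ctrlr-loss-timeout-sec 2   : give up on the controller after ~2 s of failed retries
  # --reconnect-delay-sec 1      : wait 1 s between reconnect attempts
  # --fast-io-fail-timeout-sec 1 : start failing queued I/O back after ~1 s disconnected
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
      bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach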
00:27:30.362 [2024-07-15 15:32:34.187959] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:30.362 [2024-07-15 15:32:34.187979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.362 [2024-07-15 15:32:34.187990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.362 [2024-07-15 15:32:34.188000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.362 [2024-07-15 15:32:34.188009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.362 [2024-07-15 15:32:34.188019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.362 [2024-07-15 15:32:34.188029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.362 [2024-07-15 15:32:34.188038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.362 [2024-07-15 15:32:34.188047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.362 [2024-07-15 15:32:34.188056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:30.362 [2024-07-15 15:32:34.188066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.362 [2024-07-15 15:32:34.188075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
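The "[[ nvme0n1 != '' ]]" check followed by "sleep 1" in the trace comes from wait_for_bdev, which keeps re-reading the bdev list until it matches the expected value — here it is waiting for the list to go empty while the controller reset fails. A sketch under the assumption that the helper takes the expected list as its only argument (consistent with the later "wait_for_bdev nvme1n1" call):

    # Hypothetical reconstruction of the polling loop in
    # host/discovery_remove_ifc.sh; the comparison and the one-second
    # sleep are taken directly from the trace.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }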
00:27:30.362 [2024-07-15 15:32:34.188099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191eba0 (9): Bad file descriptor 00:27:30.362 [2024-07-15 15:32:34.189100] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:30.362 [2024-07-15 15:32:34.189115] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.362 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:30.621 15:32:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:31.556 15:32:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:32.492 [2024-07-15 15:32:36.199449] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:32.492 [2024-07-15 15:32:36.199467] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:32.492 [2024-07-15 15:32:36.199482] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:32.492 [2024-07-15 15:32:36.285739] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:32.492 [2024-07-15 15:32:36.382080] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:32.492 [2024-07-15 15:32:36.382117] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:32.492 [2024-07-15 15:32:36.382136] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:32.492 [2024-07-15 15:32:36.382153] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:32.492 [2024-07-15 15:32:36.382161] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:32.492 [2024-07-15 15:32:36.388771] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x190e620 was disconnected and freed. delete nvme_qpair. 
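What flips the state back from empty to nvme1n1 is the interface restore two entries earlier: once the address and link return inside the target's network namespace, the discovery poller reconnects on its own and re-creates the subsystem (the "new subsystem nvme1" and "attach nvme1 done" lines above). The two commands are verbatim from the script trace at @82/@83; only the wrapper name is illustrative:

    # Restore the target-side interface inside its netns; the discovery
    # service then re-adds nqn.2016-06.io.spdk:cnode0 without intervention.
    restore_target_ifc() {  # hypothetical name for the @82/@83 steps
        ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
        ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    }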
00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3183031 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3183031 ']' 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3183031 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3183031 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3183031' 00:27:32.750 killing process with pid 3183031 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3183031 00:27:32.750 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3183031 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.009 rmmod nvme_tcp 00:27:33.009 rmmod nvme_fabrics 00:27:33.009 rmmod nvme_keyring 00:27:33.009 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
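killprocess, traced here for pid 3183031 and again below for 3182833, follows one fixed pattern from autotest_common.sh: validate the pid argument, probe the process with kill -0, log its name, then kill and wait. A condensed sketch — the uname and sudo special-casing visible in the trace is elided:

    # Condensed from the autotest_common.sh trace (@948-@972).
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1              # the '[ -z $pid ]' guard
        kill -0 "$pid" || return               # is the process still alive?
        echo "killing process with pid $pid"   # name taken from ps -o comm= in the trace
        kill "$pid"
        wait "$pid"
    }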
00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3182833 ']' 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3182833 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3182833 ']' 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3182833 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3182833 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3182833' 00:27:33.010 killing process with pid 3182833 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3182833 00:27:33.010 15:32:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3182833 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.269 15:32:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.802 15:32:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:35.802 00:27:35.802 real 0m22.927s 00:27:35.802 user 0m26.636s 00:27:35.802 sys 0m7.414s 00:27:35.802 15:32:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.802 15:32:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.802 ************************************ 00:27:35.802 END TEST nvmf_discovery_remove_ifc 00:27:35.802 ************************************ 00:27:35.802 15:32:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:35.803 15:32:39 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:35.803 15:32:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:35.803 15:32:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.803 15:32:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.803 ************************************ 00:27:35.803 START TEST nvmf_identify_kernel_target 00:27:35.803 ************************************ 
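The START/END banners and the real/user/sys block come from run_test, which brackets every test script with markers and a timing summary. A rough sketch of the wrapper, inferred only from its visible output, the "[ 3 -le 1 ]" argument-count check, and the timing block; the actual body in autotest_common.sh is more involved:

    # Approximate shape of run_test, inferred from its visible output only.
    run_test() {
        local name=$1; shift
        (($# >= 1)) || return 1   # the '[ 3 -le 1 ]' arg-count guard in the trace
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys summary above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }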
00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:35.803 * Looking for test storage... 00:27:35.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:35.803 15:32:39 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.803 15:32:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:42.440 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:42.440 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:42.440 Found net devices under 0000:af:00.0: cvl_0_0 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:42.440 Found net devices under 0000:af:00.1: cvl_0_1 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:42.440 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.441 15:32:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:42.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:27:42.441 00:27:42.441 --- 10.0.0.2 ping statistics --- 00:27:42.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.441 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:27:42.441 00:27:42.441 --- 10.0.0.1 ping statistics --- 00:27:42.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.441 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:42.441 15:32:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:42.441 15:32:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:45.731 Waiting for block devices as requested 00:27:45.731 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:45.731 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:45.731 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:45.731 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:45.731 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:45.990 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:45.990 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:45.990 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:45.990 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:46.249 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:46.249 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:46.249 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:46.508 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:46.508 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:46.508 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:46.508 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:46.767 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:46.767 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:47.027 No valid GPT data, bailing 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:47.027 00:27:47.027 Discovery Log Number of Records 2, Generation counter 2 00:27:47.027 =====Discovery Log Entry 0====== 00:27:47.027 trtype: tcp 00:27:47.027 adrfam: ipv4 00:27:47.027 subtype: current discovery subsystem 00:27:47.027 treq: not specified, sq flow control disable supported 00:27:47.027 portid: 1 00:27:47.027 trsvcid: 4420 00:27:47.027 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:47.027 traddr: 10.0.0.1 00:27:47.027 eflags: none 00:27:47.027 sectype: none 00:27:47.027 =====Discovery Log Entry 1====== 00:27:47.027 trtype: tcp 00:27:47.027 adrfam: ipv4 00:27:47.027 subtype: nvme subsystem 00:27:47.027 treq: not specified, sq flow control disable supported 00:27:47.027 portid: 1 00:27:47.027 trsvcid: 4420 00:27:47.027 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:47.027 traddr: 10.0.0.1 00:27:47.027 eflags: none 00:27:47.027 sectype: none 00:27:47.027 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:47.027 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:47.027 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.027 ===================================================== 00:27:47.027 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:47.027 ===================================================== 00:27:47.027 Controller Capabilities/Features 00:27:47.027 ================================ 00:27:47.027 Vendor ID: 0000 00:27:47.027 Subsystem Vendor ID: 0000 00:27:47.027 Serial Number: fb49aca5f146c81af5c7 00:27:47.027 Model Number: Linux 00:27:47.027 Firmware Version: 6.7.0-68 00:27:47.027 Recommended Arb Burst: 0 00:27:47.027 IEEE OUI Identifier: 00 00 00 00:27:47.027 Multi-path I/O 00:27:47.027 May have multiple subsystem ports: No 00:27:47.027 May have multiple 
controllers: No 00:27:47.027 Associated with SR-IOV VF: No 00:27:47.027 Max Data Transfer Size: Unlimited 00:27:47.027 Max Number of Namespaces: 0 00:27:47.027 Max Number of I/O Queues: 1024 00:27:47.027 NVMe Specification Version (VS): 1.3 00:27:47.027 NVMe Specification Version (Identify): 1.3 00:27:47.027 Maximum Queue Entries: 1024 00:27:47.027 Contiguous Queues Required: No 00:27:47.027 Arbitration Mechanisms Supported 00:27:47.027 Weighted Round Robin: Not Supported 00:27:47.027 Vendor Specific: Not Supported 00:27:47.027 Reset Timeout: 7500 ms 00:27:47.027 Doorbell Stride: 4 bytes 00:27:47.027 NVM Subsystem Reset: Not Supported 00:27:47.027 Command Sets Supported 00:27:47.027 NVM Command Set: Supported 00:27:47.027 Boot Partition: Not Supported 00:27:47.027 Memory Page Size Minimum: 4096 bytes 00:27:47.027 Memory Page Size Maximum: 4096 bytes 00:27:47.027 Persistent Memory Region: Not Supported 00:27:47.027 Optional Asynchronous Events Supported 00:27:47.027 Namespace Attribute Notices: Not Supported 00:27:47.027 Firmware Activation Notices: Not Supported 00:27:47.027 ANA Change Notices: Not Supported 00:27:47.027 PLE Aggregate Log Change Notices: Not Supported 00:27:47.027 LBA Status Info Alert Notices: Not Supported 00:27:47.027 EGE Aggregate Log Change Notices: Not Supported 00:27:47.027 Normal NVM Subsystem Shutdown event: Not Supported 00:27:47.027 Zone Descriptor Change Notices: Not Supported 00:27:47.027 Discovery Log Change Notices: Supported 00:27:47.027 Controller Attributes 00:27:47.027 128-bit Host Identifier: Not Supported 00:27:47.027 Non-Operational Permissive Mode: Not Supported 00:27:47.027 NVM Sets: Not Supported 00:27:47.027 Read Recovery Levels: Not Supported 00:27:47.027 Endurance Groups: Not Supported 00:27:47.027 Predictable Latency Mode: Not Supported 00:27:47.027 Traffic Based Keep ALive: Not Supported 00:27:47.027 Namespace Granularity: Not Supported 00:27:47.027 SQ Associations: Not Supported 00:27:47.027 UUID List: Not Supported 00:27:47.027 Multi-Domain Subsystem: Not Supported 00:27:47.027 Fixed Capacity Management: Not Supported 00:27:47.027 Variable Capacity Management: Not Supported 00:27:47.027 Delete Endurance Group: Not Supported 00:27:47.027 Delete NVM Set: Not Supported 00:27:47.027 Extended LBA Formats Supported: Not Supported 00:27:47.027 Flexible Data Placement Supported: Not Supported 00:27:47.027 00:27:47.027 Controller Memory Buffer Support 00:27:47.027 ================================ 00:27:47.027 Supported: No 00:27:47.027 00:27:47.027 Persistent Memory Region Support 00:27:47.027 ================================ 00:27:47.027 Supported: No 00:27:47.027 00:27:47.027 Admin Command Set Attributes 00:27:47.027 ============================ 00:27:47.027 Security Send/Receive: Not Supported 00:27:47.027 Format NVM: Not Supported 00:27:47.027 Firmware Activate/Download: Not Supported 00:27:47.027 Namespace Management: Not Supported 00:27:47.027 Device Self-Test: Not Supported 00:27:47.027 Directives: Not Supported 00:27:47.027 NVMe-MI: Not Supported 00:27:47.027 Virtualization Management: Not Supported 00:27:47.027 Doorbell Buffer Config: Not Supported 00:27:47.027 Get LBA Status Capability: Not Supported 00:27:47.027 Command & Feature Lockdown Capability: Not Supported 00:27:47.027 Abort Command Limit: 1 00:27:47.027 Async Event Request Limit: 1 00:27:47.027 Number of Firmware Slots: N/A 00:27:47.027 Firmware Slot 1 Read-Only: N/A 00:27:47.027 Firmware Activation Without Reset: N/A 00:27:47.027 Multiple Update Detection Support: N/A 
00:27:47.027 Firmware Update Granularity: No Information Provided 00:27:47.027 Per-Namespace SMART Log: No 00:27:47.027 Asymmetric Namespace Access Log Page: Not Supported 00:27:47.027 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:47.027 Command Effects Log Page: Not Supported 00:27:47.027 Get Log Page Extended Data: Supported 00:27:47.027 Telemetry Log Pages: Not Supported 00:27:47.027 Persistent Event Log Pages: Not Supported 00:27:47.027 Supported Log Pages Log Page: May Support 00:27:47.027 Commands Supported & Effects Log Page: Not Supported 00:27:47.027 Feature Identifiers & Effects Log Page:May Support 00:27:47.027 NVMe-MI Commands & Effects Log Page: May Support 00:27:47.027 Data Area 4 for Telemetry Log: Not Supported 00:27:47.027 Error Log Page Entries Supported: 1 00:27:47.027 Keep Alive: Not Supported 00:27:47.027 00:27:47.027 NVM Command Set Attributes 00:27:47.027 ========================== 00:27:47.027 Submission Queue Entry Size 00:27:47.027 Max: 1 00:27:47.027 Min: 1 00:27:47.027 Completion Queue Entry Size 00:27:47.027 Max: 1 00:27:47.027 Min: 1 00:27:47.027 Number of Namespaces: 0 00:27:47.027 Compare Command: Not Supported 00:27:47.027 Write Uncorrectable Command: Not Supported 00:27:47.027 Dataset Management Command: Not Supported 00:27:47.027 Write Zeroes Command: Not Supported 00:27:47.027 Set Features Save Field: Not Supported 00:27:47.027 Reservations: Not Supported 00:27:47.027 Timestamp: Not Supported 00:27:47.027 Copy: Not Supported 00:27:47.027 Volatile Write Cache: Not Present 00:27:47.027 Atomic Write Unit (Normal): 1 00:27:47.027 Atomic Write Unit (PFail): 1 00:27:47.027 Atomic Compare & Write Unit: 1 00:27:47.027 Fused Compare & Write: Not Supported 00:27:47.027 Scatter-Gather List 00:27:47.027 SGL Command Set: Supported 00:27:47.027 SGL Keyed: Not Supported 00:27:47.027 SGL Bit Bucket Descriptor: Not Supported 00:27:47.027 SGL Metadata Pointer: Not Supported 00:27:47.027 Oversized SGL: Not Supported 00:27:47.027 SGL Metadata Address: Not Supported 00:27:47.027 SGL Offset: Supported 00:27:47.027 Transport SGL Data Block: Not Supported 00:27:47.027 Replay Protected Memory Block: Not Supported 00:27:47.027 00:27:47.027 Firmware Slot Information 00:27:47.027 ========================= 00:27:47.027 Active slot: 0 00:27:47.027 00:27:47.027 00:27:47.027 Error Log 00:27:47.027 ========= 00:27:47.027 00:27:47.027 Active Namespaces 00:27:47.028 ================= 00:27:47.028 Discovery Log Page 00:27:47.028 ================== 00:27:47.028 Generation Counter: 2 00:27:47.028 Number of Records: 2 00:27:47.028 Record Format: 0 00:27:47.028 00:27:47.028 Discovery Log Entry 0 00:27:47.028 ---------------------- 00:27:47.028 Transport Type: 3 (TCP) 00:27:47.028 Address Family: 1 (IPv4) 00:27:47.028 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:47.028 Entry Flags: 00:27:47.028 Duplicate Returned Information: 0 00:27:47.028 Explicit Persistent Connection Support for Discovery: 0 00:27:47.028 Transport Requirements: 00:27:47.028 Secure Channel: Not Specified 00:27:47.028 Port ID: 1 (0x0001) 00:27:47.028 Controller ID: 65535 (0xffff) 00:27:47.028 Admin Max SQ Size: 32 00:27:47.028 Transport Service Identifier: 4420 00:27:47.028 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:47.028 Transport Address: 10.0.0.1 00:27:47.028 Discovery Log Entry 1 00:27:47.028 ---------------------- 00:27:47.028 Transport Type: 3 (TCP) 00:27:47.028 Address Family: 1 (IPv4) 00:27:47.028 Subsystem Type: 2 (NVM Subsystem) 00:27:47.028 Entry Flags: 
00:27:47.028 Duplicate Returned Information: 0 00:27:47.028 Explicit Persistent Connection Support for Discovery: 0 00:27:47.028 Transport Requirements: 00:27:47.028 Secure Channel: Not Specified 00:27:47.028 Port ID: 1 (0x0001) 00:27:47.028 Controller ID: 65535 (0xffff) 00:27:47.028 Admin Max SQ Size: 32 00:27:47.028 Transport Service Identifier: 4420 00:27:47.028 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:47.028 Transport Address: 10.0.0.1 00:27:47.028 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:47.287 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.287 get_feature(0x01) failed 00:27:47.287 get_feature(0x02) failed 00:27:47.287 get_feature(0x04) failed 00:27:47.287 ===================================================== 00:27:47.287 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:47.287 ===================================================== 00:27:47.287 Controller Capabilities/Features 00:27:47.287 ================================ 00:27:47.287 Vendor ID: 0000 00:27:47.287 Subsystem Vendor ID: 0000 00:27:47.287 Serial Number: 6119799c7c719f888773 00:27:47.287 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:47.287 Firmware Version: 6.7.0-68 00:27:47.287 Recommended Arb Burst: 6 00:27:47.287 IEEE OUI Identifier: 00 00 00 00:27:47.287 Multi-path I/O 00:27:47.287 May have multiple subsystem ports: Yes 00:27:47.287 May have multiple controllers: Yes 00:27:47.287 Associated with SR-IOV VF: No 00:27:47.287 Max Data Transfer Size: Unlimited 00:27:47.287 Max Number of Namespaces: 1024 00:27:47.287 Max Number of I/O Queues: 128 00:27:47.287 NVMe Specification Version (VS): 1.3 00:27:47.287 NVMe Specification Version (Identify): 1.3 00:27:47.287 Maximum Queue Entries: 1024 00:27:47.287 Contiguous Queues Required: No 00:27:47.287 Arbitration Mechanisms Supported 00:27:47.287 Weighted Round Robin: Not Supported 00:27:47.287 Vendor Specific: Not Supported 00:27:47.287 Reset Timeout: 7500 ms 00:27:47.287 Doorbell Stride: 4 bytes 00:27:47.287 NVM Subsystem Reset: Not Supported 00:27:47.287 Command Sets Supported 00:27:47.287 NVM Command Set: Supported 00:27:47.287 Boot Partition: Not Supported 00:27:47.287 Memory Page Size Minimum: 4096 bytes 00:27:47.287 Memory Page Size Maximum: 4096 bytes 00:27:47.287 Persistent Memory Region: Not Supported 00:27:47.287 Optional Asynchronous Events Supported 00:27:47.287 Namespace Attribute Notices: Supported 00:27:47.287 Firmware Activation Notices: Not Supported 00:27:47.287 ANA Change Notices: Supported 00:27:47.287 PLE Aggregate Log Change Notices: Not Supported 00:27:47.287 LBA Status Info Alert Notices: Not Supported 00:27:47.287 EGE Aggregate Log Change Notices: Not Supported 00:27:47.287 Normal NVM Subsystem Shutdown event: Not Supported 00:27:47.287 Zone Descriptor Change Notices: Not Supported 00:27:47.287 Discovery Log Change Notices: Not Supported 00:27:47.287 Controller Attributes 00:27:47.287 128-bit Host Identifier: Supported 00:27:47.287 Non-Operational Permissive Mode: Not Supported 00:27:47.287 NVM Sets: Not Supported 00:27:47.287 Read Recovery Levels: Not Supported 00:27:47.287 Endurance Groups: Not Supported 00:27:47.287 Predictable Latency Mode: Not Supported 00:27:47.287 Traffic Based Keep ALive: Supported 00:27:47.287 Namespace Granularity: Not Supported 
00:27:47.287 SQ Associations: Not Supported 00:27:47.287 UUID List: Not Supported 00:27:47.287 Multi-Domain Subsystem: Not Supported 00:27:47.287 Fixed Capacity Management: Not Supported 00:27:47.287 Variable Capacity Management: Not Supported 00:27:47.287 Delete Endurance Group: Not Supported 00:27:47.287 Delete NVM Set: Not Supported 00:27:47.287 Extended LBA Formats Supported: Not Supported 00:27:47.287 Flexible Data Placement Supported: Not Supported 00:27:47.287 00:27:47.287 Controller Memory Buffer Support 00:27:47.287 ================================ 00:27:47.287 Supported: No 00:27:47.287 00:27:47.287 Persistent Memory Region Support 00:27:47.287 ================================ 00:27:47.287 Supported: No 00:27:47.287 00:27:47.287 Admin Command Set Attributes 00:27:47.287 ============================ 00:27:47.287 Security Send/Receive: Not Supported 00:27:47.287 Format NVM: Not Supported 00:27:47.287 Firmware Activate/Download: Not Supported 00:27:47.287 Namespace Management: Not Supported 00:27:47.287 Device Self-Test: Not Supported 00:27:47.287 Directives: Not Supported 00:27:47.287 NVMe-MI: Not Supported 00:27:47.287 Virtualization Management: Not Supported 00:27:47.287 Doorbell Buffer Config: Not Supported 00:27:47.287 Get LBA Status Capability: Not Supported 00:27:47.287 Command & Feature Lockdown Capability: Not Supported 00:27:47.288 Abort Command Limit: 4 00:27:47.288 Async Event Request Limit: 4 00:27:47.288 Number of Firmware Slots: N/A 00:27:47.288 Firmware Slot 1 Read-Only: N/A 00:27:47.288 Firmware Activation Without Reset: N/A 00:27:47.288 Multiple Update Detection Support: N/A 00:27:47.288 Firmware Update Granularity: No Information Provided 00:27:47.288 Per-Namespace SMART Log: Yes 00:27:47.288 Asymmetric Namespace Access Log Page: Supported 00:27:47.288 ANA Transition Time : 10 sec 00:27:47.288 00:27:47.288 Asymmetric Namespace Access Capabilities 00:27:47.288 ANA Optimized State : Supported 00:27:47.288 ANA Non-Optimized State : Supported 00:27:47.288 ANA Inaccessible State : Supported 00:27:47.288 ANA Persistent Loss State : Supported 00:27:47.288 ANA Change State : Supported 00:27:47.288 ANAGRPID is not changed : No 00:27:47.288 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:47.288 00:27:47.288 ANA Group Identifier Maximum : 128 00:27:47.288 Number of ANA Group Identifiers : 128 00:27:47.288 Max Number of Allowed Namespaces : 1024 00:27:47.288 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:47.288 Command Effects Log Page: Supported 00:27:47.288 Get Log Page Extended Data: Supported 00:27:47.288 Telemetry Log Pages: Not Supported 00:27:47.288 Persistent Event Log Pages: Not Supported 00:27:47.288 Supported Log Pages Log Page: May Support 00:27:47.288 Commands Supported & Effects Log Page: Not Supported 00:27:47.288 Feature Identifiers & Effects Log Page:May Support 00:27:47.288 NVMe-MI Commands & Effects Log Page: May Support 00:27:47.288 Data Area 4 for Telemetry Log: Not Supported 00:27:47.288 Error Log Page Entries Supported: 128 00:27:47.288 Keep Alive: Supported 00:27:47.288 Keep Alive Granularity: 1000 ms 00:27:47.288 00:27:47.288 NVM Command Set Attributes 00:27:47.288 ========================== 00:27:47.288 Submission Queue Entry Size 00:27:47.288 Max: 64 00:27:47.288 Min: 64 00:27:47.288 Completion Queue Entry Size 00:27:47.288 Max: 16 00:27:47.288 Min: 16 00:27:47.288 Number of Namespaces: 1024 00:27:47.288 Compare Command: Not Supported 00:27:47.288 Write Uncorrectable Command: Not Supported 00:27:47.288 Dataset Management Command: Supported 
00:27:47.288 Write Zeroes Command: Supported
00:27:47.288 Set Features Save Field: Not Supported
00:27:47.288 Reservations: Not Supported
00:27:47.288 Timestamp: Not Supported
00:27:47.288 Copy: Not Supported
00:27:47.288 Volatile Write Cache: Present
00:27:47.288 Atomic Write Unit (Normal): 1
00:27:47.288 Atomic Write Unit (PFail): 1
00:27:47.288 Atomic Compare & Write Unit: 1
00:27:47.288 Fused Compare & Write: Not Supported
00:27:47.288 Scatter-Gather List
00:27:47.288 SGL Command Set: Supported
00:27:47.288 SGL Keyed: Not Supported
00:27:47.288 SGL Bit Bucket Descriptor: Not Supported
00:27:47.288 SGL Metadata Pointer: Not Supported
00:27:47.288 Oversized SGL: Not Supported
00:27:47.288 SGL Metadata Address: Not Supported
00:27:47.288 SGL Offset: Supported
00:27:47.288 Transport SGL Data Block: Not Supported
00:27:47.288 Replay Protected Memory Block: Not Supported
00:27:47.288
00:27:47.288 Firmware Slot Information
00:27:47.288 =========================
00:27:47.288 Active slot: 0
00:27:47.288
00:27:47.288 Asymmetric Namespace Access
00:27:47.288 ===========================
00:27:47.288 Change Count : 0
00:27:47.288 Number of ANA Group Descriptors : 1
00:27:47.288 ANA Group Descriptor : 0
00:27:47.288 ANA Group ID : 1
00:27:47.288 Number of NSID Values : 1
00:27:47.288 Change Count : 0
00:27:47.288 ANA State : 1
00:27:47.288 Namespace Identifier : 1
00:27:47.288
00:27:47.288 Commands Supported and Effects
00:27:47.288 ==============================
00:27:47.288 Admin Commands
00:27:47.288 --------------
00:27:47.288 Get Log Page (02h): Supported
00:27:47.288 Identify (06h): Supported
00:27:47.288 Abort (08h): Supported
00:27:47.288 Set Features (09h): Supported
00:27:47.288 Get Features (0Ah): Supported
00:27:47.288 Asynchronous Event Request (0Ch): Supported
00:27:47.288 Keep Alive (18h): Supported
00:27:47.288 I/O Commands
00:27:47.288 ------------
00:27:47.288 Flush (00h): Supported
00:27:47.288 Write (01h): Supported LBA-Change
00:27:47.288 Read (02h): Supported
00:27:47.288 Write Zeroes (08h): Supported LBA-Change
00:27:47.288 Dataset Management (09h): Supported
00:27:47.288
00:27:47.288 Error Log
00:27:47.288 =========
00:27:47.288 Entry: 0
00:27:47.288 Error Count: 0x3
00:27:47.288 Submission Queue Id: 0x0
00:27:47.288 Command Id: 0x5
00:27:47.288 Phase Bit: 0
00:27:47.288 Status Code: 0x2
00:27:47.288 Status Code Type: 0x0
00:27:47.288 Do Not Retry: 1
00:27:47.288 Error Location: 0x28
00:27:47.288 LBA: 0x0
00:27:47.288 Namespace: 0x0
00:27:47.288 Vendor Log Page: 0x0
00:27:47.288 -----------
00:27:47.288 Entry: 1
00:27:47.288 Error Count: 0x2
00:27:47.288 Submission Queue Id: 0x0
00:27:47.288 Command Id: 0x5
00:27:47.288 Phase Bit: 0
00:27:47.288 Status Code: 0x2
00:27:47.288 Status Code Type: 0x0
00:27:47.288 Do Not Retry: 1
00:27:47.288 Error Location: 0x28
00:27:47.288 LBA: 0x0
00:27:47.288 Namespace: 0x0
00:27:47.288 Vendor Log Page: 0x0
00:27:47.288 -----------
00:27:47.288 Entry: 2
00:27:47.288 Error Count: 0x1
00:27:47.288 Submission Queue Id: 0x0
00:27:47.288 Command Id: 0x4
00:27:47.288 Phase Bit: 0
00:27:47.288 Status Code: 0x2
00:27:47.288 Status Code Type: 0x0
00:27:47.288 Do Not Retry: 1
00:27:47.288 Error Location: 0x28
00:27:47.288 LBA: 0x0
00:27:47.288 Namespace: 0x0
00:27:47.288 Vendor Log Page: 0x0
00:27:47.288
00:27:47.288 Number of Queues
00:27:47.288 ================
00:27:47.288 Number of I/O Submission Queues: 128
00:27:47.288 Number of I/O Completion Queues: 128
00:27:47.288
00:27:47.288 ZNS Specific Controller Data
00:27:47.288 ============================
00:27:47.288 Zone Append Size Limit: 0
00:27:47.288
00:27:47.288
00:27:47.288 Active Namespaces
00:27:47.288 =================
00:27:47.288 get_feature(0x05) failed
00:27:47.288 Namespace ID:1
00:27:47.288 Command Set Identifier: NVM (00h)
00:27:47.288 Deallocate: Supported
00:27:47.288 Deallocated/Unwritten Error: Not Supported
00:27:47.288 Deallocated Read Value: Unknown
00:27:47.288 Deallocate in Write Zeroes: Not Supported
00:27:47.288 Deallocated Guard Field: 0xFFFF
00:27:47.288 Flush: Supported
00:27:47.288 Reservation: Not Supported
00:27:47.288 Namespace Sharing Capabilities: Multiple Controllers
00:27:47.288 Size (in LBAs): 3125627568 (1490GiB)
00:27:47.288 Capacity (in LBAs): 3125627568 (1490GiB)
00:27:47.288 Utilization (in LBAs): 3125627568 (1490GiB)
00:27:47.288 UUID: 35e0f712-1e2a-4343-8d02-4d9a68db09db
00:27:47.288 Thin Provisioning: Not Supported
00:27:47.288 Per-NS Atomic Units: Yes
00:27:47.288 Atomic Boundary Size (Normal): 0
00:27:47.288 Atomic Boundary Size (PFail): 0
00:27:47.288 Atomic Boundary Offset: 0
00:27:47.288 NGUID/EUI64 Never Reused: No
00:27:47.288 ANA group ID : 1
00:27:47.288 Namespace Write Protected: No
00:27:47.288 Number of LBA Formats: 1
00:27:47.288 Current LBA Format: LBA Format #00
00:27:47.288 LBA Format #00: Data Size: 512 Metadata Size: 0
00:27:47.288
00:27:47.288 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:27:47.288 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:47.288 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync
00:27:47.288 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:47.288 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e
00:27:47.288 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:47.288 15:32:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:47.288 rmmod nvme_tcp
00:27:47.288 rmmod nvme_fabrics
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:47.288 15:32:51 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
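For reference, the controller and namespace data dumped above can be reproduced by hand with stock nvme-cli while the kernel target from this run is still exported on 10.0.0.1:4420. A minimal sketch (the /dev/nvme0 node name is illustrative, not taken from this log):

  # Connect to the kernel target and request the same identify data.
  nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
  nvme id-ctrl /dev/nvme0      # controller data: ANA caps, SGL support, keep-alive, error log size
  nvme id-ns /dev/nvme0n1      # namespace data: size/capacity in LBAs, UUID, LBA formats
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn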
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*)
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet
00:27:49.823 15:32:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:27:53.110 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:27:53.110 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:27:54.490 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:27:54.490
00:27:54.490 real 0m19.011s
00:27:54.490 user 0m4.489s
00:27:54.490 sys 0m10.192s
00:27:54.490 15:32:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:54.490 15:32:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:27:54.490 ************************************
00:27:54.490 END TEST nvmf_identify_kernel_target
00:27:54.490 ************************************
00:27:54.490 15:32:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:27:54.490 15:32:58 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:27:54.490 15:32:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:54.490 15:32:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:54.490 15:32:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:54.490 ************************************
00:27:54.490 START TEST nvmf_auth_host
00:27:54.490 ************************************
00:27:54.490 15:32:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:54.750 * Looking for test storage... 00:27:54.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:54.750 15:32:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:01.311 
15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:01.311 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:01.311 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:01.311 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:01.312 Found net devices under 0000:af:00.0: 
cvl_0_0 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:01.312 Found net devices under 0000:af:00.1: cvl_0_1 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:01.312 15:33:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:01.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:01.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms
00:28:01.312
00:28:01.312 --- 10.0.0.2 ping statistics ---
00:28:01.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:01.312 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:01.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:01.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms
00:28:01.312
00:28:01.312 --- 10.0.0.1 ping statistics ---
00:28:01.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:01.312 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3196173
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3196173
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3196173 ']'
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
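waitforlisten, entered above, simply polls the freshly started nvmf_tgt over /var/tmp/spdk.sock until RPC answers. A simplified stand-in for that helper (the retry count and sleep interval here are assumptions, not the script's actual values):

  # Poll the RPC socket until the target responds or retries run out.
  for ((i = 0; i < 100; i++)); do
      if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done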
00:28:01.312 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:01.313 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.245 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:02.245 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:02.245 15:33:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:02.245 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:02.245 15:33:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7bbfc11ef759bec471ee343c6991dafd 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1Wh 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7bbfc11ef759bec471ee343c6991dafd 0 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7bbfc11ef759bec471ee343c6991dafd 0 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7bbfc11ef759bec471ee343c6991dafd 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1Wh 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1Wh 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.1Wh 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:02.245 
15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=310bbded8f189ecb0feeecce4f0ea23c0c8b9c235a0695c2834258fea6e88cfa 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.imH 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 310bbded8f189ecb0feeecce4f0ea23c0c8b9c235a0695c2834258fea6e88cfa 3 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 310bbded8f189ecb0feeecce4f0ea23c0c8b9c235a0695c2834258fea6e88cfa 3 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=310bbded8f189ecb0feeecce4f0ea23c0c8b9c235a0695c2834258fea6e88cfa 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:02.245 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.imH 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.imH 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.imH 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=22f77bea6f9533d49800e56a59123b7d99546562d13dfea4 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Oqa 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 22f77bea6f9533d49800e56a59123b7d99546562d13dfea4 0 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 22f77bea6f9533d49800e56a59123b7d99546562d13dfea4 0 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=22f77bea6f9533d49800e56a59123b7d99546562d13dfea4 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Oqa 00:28:02.503 15:33:06 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Oqa 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Oqa 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=658fd603ef6063c279f8e3430e31d88eaf9264cf33f04060 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1gv 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 658fd603ef6063c279f8e3430e31d88eaf9264cf33f04060 2 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 658fd603ef6063c279f8e3430e31d88eaf9264cf33f04060 2 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=658fd603ef6063c279f8e3430e31d88eaf9264cf33f04060 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1gv 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1gv 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1gv 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:02.503 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=99bb3d868d4ae5b05540e08b3669dd97 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FWN 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 99bb3d868d4ae5b05540e08b3669dd97 1 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 99bb3d868d4ae5b05540e08b3669dd97 1 
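Each gen_dhchap_key call above draws random hex from /dev/urandom with xxd and hands it to an inline python step; the files it writes hold secrets of the form DHHC-1:<digest id>:<base64 payload>:. A sketch of what that python step plausibly computes (the CRC32 trailer and its little-endian layout are inferred from the DH-HMAC-CHAP secret representation, they are not visible in the trace):

  # Hypothetical reconstruction: the ASCII hex string itself is the secret,
  # with a CRC32 of it appended before base64 encoding. digest=1 maps to sha256
  # in the digests table traced above.
  key=99bb3d868d4ae5b05540e08b3669dd97 digest=1 \
      python3 -c 'import base64,os,zlib; k=os.environ["key"].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(os.environ["digest"]), base64.b64encode(k+crc).decode()))'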
00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=99bb3d868d4ae5b05540e08b3669dd97 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FWN 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FWN 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FWN 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c2be2dfc901d466701a6af461ee5e69 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.z4j 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c2be2dfc901d466701a6af461ee5e69 1 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c2be2dfc901d466701a6af461ee5e69 1 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c2be2dfc901d466701a6af461ee5e69 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.z4j 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.z4j 00:28:02.504 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.z4j 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=9cee32f740e51b795878c91dab1f0b31bc92d658aa736202 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LPz 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9cee32f740e51b795878c91dab1f0b31bc92d658aa736202 2 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9cee32f740e51b795878c91dab1f0b31bc92d658aa736202 2 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9cee32f740e51b795878c91dab1f0b31bc92d658aa736202 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LPz 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LPz 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LPz 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3b317802f7920f7dd04e1fcd95ce6c98 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.OQ5 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3b317802f7920f7dd04e1fcd95ce6c98 0 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3b317802f7920f7dd04e1fcd95ce6c98 0 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3b317802f7920f7dd04e1fcd95ce6c98 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.OQ5 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.OQ5 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.OQ5 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8372c20ec35afd50281332fc936511aef9ae928175cc96777ccf7202095bc57a 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wn2 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8372c20ec35afd50281332fc936511aef9ae928175cc96777ccf7202095bc57a 3 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8372c20ec35afd50281332fc936511aef9ae928175cc96777ccf7202095bc57a 3 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8372c20ec35afd50281332fc936511aef9ae928175cc96777ccf7202095bc57a 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wn2 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wn2 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.wn2 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3196173 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3196173 ']' 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
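All of the key/ckey pairs in this stretch use that same DHHC-1 container format. Outside the harness, nvme-cli ships a generator for it; a rough equivalent of the null-digest case (flag spelling per current nvme-cli, worth confirming with nvme gen-dhchap-key --help on older builds):

  # Hypothetical invocation: prints DHHC-1:00:...: with a fresh 32-byte secret.
  nvme gen-dhchap-key --key-length=32 --hmac=0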
00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.762 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.020 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1Wh 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.imH ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.imH 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Oqa 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1gv ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1gv 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FWN 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.z4j ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z4j 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LPz 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.OQ5 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.OQ5 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wn2 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
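The rpc_cmd keyring_file_add_key calls above load each generated file into the running target's keyring. Replayed by hand against the same RPC socket it looks roughly like this (key names and paths exactly as loaded above; rpc.py path relative to the spdk checkout):

  # Register a host key and its controller counterpart with the running target.
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0 /tmp/spdk.key-null.1Wh
  scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.imH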
00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:03.021 15:33:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:06.300 Waiting for block devices as requested 00:28:06.300 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:06.558 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:06.558 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:06.558 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:06.816 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:06.816 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:06.816 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:06.816 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:07.073 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:07.073 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:07.073 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:07.331 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:07.331 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:07.331 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:07.589 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:07.589 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:07.589 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:08.553 No valid GPT data, bailing 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:28:08.553 00:28:08.553 Discovery Log Number of Records 2, Generation counter 2 00:28:08.553 =====Discovery Log Entry 0====== 00:28:08.553 trtype: tcp 00:28:08.553 adrfam: ipv4 00:28:08.553 subtype: current discovery subsystem 00:28:08.553 treq: not specified, sq flow control disable supported 00:28:08.553 portid: 1 00:28:08.553 trsvcid: 4420 00:28:08.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:08.553 traddr: 10.0.0.1 00:28:08.553 eflags: none 00:28:08.553 sectype: none 00:28:08.553 =====Discovery Log Entry 1====== 00:28:08.553 trtype: tcp 00:28:08.553 adrfam: ipv4 00:28:08.553 subtype: nvme subsystem 00:28:08.553 treq: not specified, sq flow control disable supported 00:28:08.553 portid: 1 00:28:08.553 trsvcid: 4420 00:28:08.553 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:08.553 traddr: 10.0.0.1 00:28:08.553 eflags: none 00:28:08.553 sectype: none 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 
]] 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.553 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.812 nvme0n1 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.812 15:33:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:08.812 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.813 
15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.813 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.071 nvme0n1 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.071 15:33:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.071 15:33:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.329 nvme0n1 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
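Each connect_authenticate iteration reduces to two initiator-side RPCs, shown here for the sha256/ffdhe2048/key1 case just traced (arguments copied from the rpc_cmd lines above):

# The two RPCs behind one connect_authenticate pass, via SPDK's scripts/rpc.py.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# On success the controller appears in bdev_nvme_get_controllers as nvme0 and is
# torn down with bdev_nvme_detach_controller nvme0 before the next combination.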
00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.329 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.330 nvme0n1 00:28:09.330 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:09.587 15:33:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.587 nvme0n1 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.587 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.845 nvme0n1 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.845 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.103 nvme0n1 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.103 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.104 15:33:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.361 nvme0n1 00:28:10.361 
15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.361 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.362 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.619 nvme0n1 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
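The echo 'hmac(sha256)' and echo ffdhe3072 entries in this stretch are nvmet_auth_set_key pushing the target-side parameters for keyid 3 into the kernel's host entry. With the redirections spelled out (the dhchap_* names are the usual nvmet host attributes, assumed rather than visible in the xtrace; secrets abbreviated):

# Target-side counterpart of the initiator settings, for the sha256/ffdhe3072/key3 pass.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > $host/dhchap_hash
echo ffdhe3072      > $host/dhchap_dhgroup
echo 'DHHC-1:02:OWNlZTMy...' > $host/dhchap_key       # key3
echo 'DHHC-1:00:M2IzMTc4...' > $host/dhchap_ctrl_key  # ckey3, when a controller key exists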
00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.619 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.878 nvme0n1 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.878 
15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.878 15:33:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.878 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.136 nvme0n1 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:11.136 15:33:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.136 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.137 15:33:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.395 nvme0n1 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.395 15:33:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.395 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.653 nvme0n1 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:11.653 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.654 15:33:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.654 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.911 nvme0n1 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
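[The trace repeats one fixed sequence per (dhgroup, keyid) pair: host/auth.sh loads the target-side key (nvmet_auth_set_key), restricts the host to a single digest/DH-group combination (bdev_nvme_set_options), attaches a controller with the matching --dhchap-key plus --dhchap-ctrlr-key when a controller key exists, verifies the controller name, and detaches before the next iteration. Below is a compressed sketch reconstructed from the xtrace lines only, not the verbatim host/auth.sh source: rpc_cmd, nvmet_auth_set_key, and the keys/ckeys arrays are the harness's own names as they appear in the log; connect_authenticate_sketch is a hypothetical stand-in for the real connect_authenticate whose body the auth.sh@55..@65 lines show; the digest is fixed at sha256 in this stretch of the log, and the group list is just what this excerpt exercises (the iterations below move on to ffdhe6144 and ffdhe8192).

connect_authenticate_sketch() {   # hypothetical name; mirrors auth.sh@55..@65
    local digest=$1 dhgroup=$2 keyid=$3
    # Host side: allow exactly one digest and one DH group for this attempt.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach over TCP to the in-kernel target, authenticating with keyN/ckeyN.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # The controller must have come up under the expected name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next keyid
}

# keys/ckeys are populated earlier in auth.sh with the DHHC-1 strings echoed above.
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do                       # keyids 0..4
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # target side: provision key/ckey
        connect_authenticate_sketch sha256 "$dhgroup" "$keyid"
    done
done

The ${ckeys[keyid]:+...} expansion is why the keyid=4 attach calls in the log carry no --dhchap-ctrlr-key: that slot's controller key is empty (the "[[ -z '' ]]" checks above), so the flag is dropped entirely and only unidirectional authentication is tested for that key.]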
00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:11.911 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.912 15:33:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.169 nvme0n1 00:28:12.169 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.169 15:33:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.169 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.169 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.169 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.169 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.427 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.685 nvme0n1 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:12.685 15:33:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.685 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.686 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.944 nvme0n1 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.944 
15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.944 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.202 15:33:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.202 15:33:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.460 nvme0n1 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.460 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.025 nvme0n1 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.025 
15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:14.025 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.026 15:33:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.283 nvme0n1 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.283 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.849 nvme0n1 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:14.849 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.850 15:33:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.416 nvme0n1 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.416 15:33:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.416 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.979 nvme0n1 00:28:15.979 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.979 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.979 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.980 15:33:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.543 nvme0n1 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.543 
15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
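(The round repeated throughout this trace for every key id boils down to the following sketch, assembled from the rpc_cmd calls visible above; rpc_cmd is assumed to be the test harness's wrapper around scripts/rpc.py, and the digest, dhgroup, and key id shown are just the values from this iteration.)

    # One connect_authenticate round, condensed from the trace; values vary per iteration.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # Confirm the authenticated controller came up, then detach before the next key.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
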
00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.543 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.108 nvme0n1 00:28:17.108 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.108 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.108 15:33:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.108 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.108 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.108 15:33:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.108 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.108 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.108 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.108 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.365 
15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.365 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.928 nvme0n1 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.928 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.929 nvme0n1 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.929 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
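(The ckey array expansion at host/auth.sh@58, visible just above, decides whether a controller key accompanies the attach: with bash's :+ parameter expansion, the --dhchap-ctrlr-key flag only materializes when ckeys[keyid] is non-empty. A minimal sketch, assuming the script consumes the array as "${ckey[@]}"; the attach arguments other than the key flags are abbreviated here, not elided in the source.)

    # Pass --dhchap-ctrlr-key only when this keyid has a controller key;
    # keyid 4 has an empty ckey in this run, so its attach carries --dhchap-key key4 alone.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 --dhchap-key "key${keyid}" "${ckey[@]}"
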
00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.186 nvme0n1 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.186 15:33:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:18.186 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.187 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.445 nvme0n1 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.445 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.703 nvme0n1 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.703 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.704 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.961 nvme0n1 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.961 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
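(The driver for this stretch of the log is the triple loop whose headers appear in the trace at host/auth.sh@100-102: every digest is paired with every DH group and every key id, and each pairing goes through nvmet_auth_set_key on the target side and connect_authenticate on the host side. A sketch of that structure, with the array contents inferred from the values exercised in this log.)

    for digest in "${digests[@]}"; do        # sha256 above, sha384 from here on
        for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048, ffdhe3072, ffdhe8192, ...
            for keyid in "${!keys[@]}"; do   # key ids 0 through 4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
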
00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.962 nvme0n1 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.962 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:28:19.219 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.220 15:33:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.220 nvme0n1 00:28:19.220 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.220 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.220 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.220 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.220 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.220 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.476 nvme0n1 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.476 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.733 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.734 nvme0n1 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.734 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.992 nvme0n1 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.992 15:33:23 
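The nvmet_auth_set_key rounds above (host/auth.sh@42-51) provision the kernel nvmet target for one digest/DH-group/key combination at a time: the echoed 'hmac(sha384)', 'ffdhe3072', and DHHC-1 strings are writes into the target's per-host configfs attributes. In the DHHC-1:XX:<base64>: secret format, the second field records how the secret was transformed (00 = untransformed; 01/02/03 = SHA-256/-384/-512), which is why the payload lengths differ between keys. Below is a minimal sketch of that target-side step, assuming the stock nvmet configfs auth layout; the paths and the helper name are assumptions, since auth.sh's own definition of nvmet_auth_set_key is outside this excerpt.

# Hypothetical stand-in for the target-side provisioning: write one
# digest/dhgroup/key combination into the kernel nvmet per-host configfs
# entry. The configfs paths assume the stock nvmet auth layout; they are
# not shown in this trace.
HOSTNQN=nqn.2024-02.io.spdk:host0
HOST_CFG=/sys/kernel/config/nvmet/hosts/$HOSTNQN

set_nvmet_auth() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    echo "hmac($digest)" > "$HOST_CFG/dhchap_hash"     # e.g. hmac(sha384)
    echo "$dhgroup"      > "$HOST_CFG/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "$key"          > "$HOST_CFG/dhchap_key"      # host DHHC-1 key
    # A controller key is only set for bidirectional auth; keyid 4 above
    # carries an empty ckey and skips this write.
    [[ -n $ckey ]] && echo "$ckey" > "$HOST_CFG/dhchap_ctrl_key"
}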
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.992 15:33:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.250 nvme0n1 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.250 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.251 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.251 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.251 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.251 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.251 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.251 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.251 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.509 nvme0n1 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.509 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.767 15:33:24 
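Each connect_authenticate round (host/auth.sh@55-61) is the initiator-side mirror of that provisioning: it pins the SPDK host to exactly one digest and one DH group via bdev_nvme_set_options, then attaches with the matching key pair. Replayed as standalone RPC calls it looks like the sketch below; scripts/rpc.py is the usual SPDK RPC client, and the key names key1/ckey1 assume the keys were registered earlier in auth.sh, outside this excerpt.

# Initiator-side flow of one round, as plain rpc.py calls (the trace runs
# the same RPCs through the rpc_cmd wrapper).
rpc=scripts/rpc.py

# Pin negotiation to a single digest and DH group for this round:
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Attach with the host key; --dhchap-ctrlr-key enables bidirectional auth
# and is omitted for key4, which has no controller key:
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1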
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.767 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.024 nvme0n1 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.024 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:21.025 15:33:24 
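The nvmf/common.sh@741-755 block that repeats before every attach is get_main_ns_ip, which resolves the address to dial. From the trace it appears to pick an environment-variable name per transport and expand it indirectly; a plausible reconstruction is sketched below (the TEST_TRANSPORT variable name is an assumption, since the trace only shows its value, tcp).

# Plausible reconstruction of get_main_ns_ip from the @741-755 trace lines:
# choose the variable that holds the right address for the transport in
# use, then expand it indirectly (here NVMF_INITIATOR_IP -> 10.0.0.1).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # name of the variable to read
    ip=${!ip}                             # indirect expansion to its value
    [[ -z $ip ]] && return 1
    echo "$ip"
}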
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.025 15:33:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.283 nvme0n1 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:21.283 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.541 nvme0n1 00:28:21.541 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.541 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.542 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.108 nvme0n1 00:28:22.108 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.108 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.108 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.108 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.108 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.108 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.109 15:33:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.367 nvme0n1 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.367 15:33:26 
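Every round then closes with the same verification and teardown (host/auth.sh@64-65): list the bdev_nvme controllers, require that exactly nvme0 showed up (the stray nvme0n1 tokens in the trace are the namespace device appearing asynchronously), and detach so the next digest/dhgroup/key combination starts from a clean slate.

# Post-connect check and teardown, as run at host/auth.sh@64-65:
rpc=scripts/rpc.py
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                  # authentication round succeeded
$rpc bdev_nvme_detach_controller nvme0  # clean up before the next round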
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.367 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.934 nvme0n1 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.934 15:33:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.192 nvme0n1 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.192 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.760 nvme0n1 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.760 15:33:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.326 nvme0n1 00:28:24.326 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.326 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.326 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.326 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.326 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.326 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.326 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.327 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.894 nvme0n1 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.894 15:33:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.487 nvme0n1 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.487 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.065 nvme0n1 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.065 15:33:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.065 15:33:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.632 nvme0n1 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.632 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:26.890 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.891 nvme0n1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.891 15:33:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.891 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 nvme0n1 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.149 15:33:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 nvme0n1 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.408 15:33:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.408 15:33:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.408 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.666 nvme0n1 00:28:27.666 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.666 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.666 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.666 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.666 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.667 nvme0n1 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.667 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.926 nvme0n1 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.926 
15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.926 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.186 15:33:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.186 15:33:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.186 nvme0n1 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
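By this point the shape of the whole test is visible: the for-markers at host/auth.sh@100, @101 and @102 show nested loops over digests, DH groups and key ids, and each combination runs the target-side key programming (@103) followed by the authenticated connect (@104). Reconstructed from those markers, with the argument plumbing inferred rather than shown:

  # Sketch: the driver loop behind the rounds above (auth.sh@100-104, reconstructed)
  for digest in "${digests[@]}"; do          # sha384, sha512, ... (only these two appear in this excerpt)
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192
          for keyid in "${!keys[@]}"; do     # 0..4; keyid 4 has no controller key
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done

The sha512 rounds here, walking ffdhe2048 key 0 through 4 and then moving to ffdhe3072, follow exactly this cross product.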
00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:28.186 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.187 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.446 nvme0n1 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.446 15:33:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:28.446 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
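get_main_ns_ip, traced over and over from nvmf/common.sh@741-755, only decides which address the host should dial for the transport under test. Reassembled from those trace lines (the control flow is an inference from the trace, not copied from source):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP          # resolves to 10.0.0.1 in this run

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                     # indirect: value of the named env var
        echo "${!ip}"
    }
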
00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.447 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.706 nvme0n1 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.706 
15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.706 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.965 nvme0n1 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.965 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.966 15:33:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.224 nvme0n1 00:28:29.224 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.224 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.225 15:33:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.225 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.484 nvme0n1 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
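The echo 'hmac(sha512)' / echo ffdhe4096 / echo DHHC-1:... steps at host/auth.sh@48-51 presumably write the negotiated parameters into the kernel nvmet target's configfs entry for the host NQN. A sketch assuming the upstream Linux nvmet attribute names; the paths and placeholder secrets are illustrative, not taken from this log:

    # assumed configfs layout for DH-HMAC-CHAP on a kernel nvmet target
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'             > "$host/dhchap_hash"      # digest under test
    echo ffdhe4096                  > "$host/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:01:<host secret>:' > "$host/dhchap_key"       # key for this keyid
    echo 'DHHC-1:01:<ctrl secret>:' > "$host/dhchap_ctrl_key"  # ckey, enables bidirectional auth
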
00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.484 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.743 nvme0n1 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.743 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.002 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.002 nvme0n1 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.262 15:33:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.522 nvme0n1 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
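Every secret in this run uses the DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where <t> names the transform applied to the configured secret (00 none, 01/02/03 for SHA-256/384/512) and the base64 payload carries the key bytes plus a trailing CRC. That reading of the format is an assumption from the NVMe-oF secret convention, not something this log states; a quick shape check under it:

    is_dhchap_secret() {
        [[ $1 =~ ^DHHC-1:0[0-3]:[A-Za-z0-9+/]+=*:$ ]]
    }
    is_dhchap_secret 'DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9:' && echo ok
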
00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.522 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.781 nvme0n1 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
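connect_authenticate itself (host/auth.sh@55-65 in the trace) is the host-side half of each iteration: pin the initiator to exactly one digest/dhgroup pair, attach with the key under test, check that the controller really materialized, then tear it down. Reconstructed from the traced commands; only the function wrapper is assumed:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # pass a controller key only when a ckey exists for this keyid (@58)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
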
00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.781 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.040 15:33:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.300 nvme0n1 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.300 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.559 nvme0n1 00:28:31.559 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.559 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.559 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.559 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.559 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.818 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.819 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.078 nvme0n1 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.078 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.079 15:33:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.647 nvme0n1 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.647 15:33:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JiZmMxMWVmNzU5YmVjNDcxZWUzNDNjNjk5MWRhZmQm4JW9: 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzEwYmJkZWQ4ZjE4OWVjYjBmZWVlY2NlNGYwZWEyM2MwYzhiOWMyMzVhMDY5NWMyODM0MjU4ZmVhNmU4OGNmYchvmk4=: 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.647 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.214 nvme0n1 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.214 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.215 15:33:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.783 nvme0n1 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.783 15:33:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTliYjNkODY4ZDRhZTViMDU1NDBlMDhiMzY2OWRkOTdpGiRa: 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGMyYmUyZGZjOTAxZDQ2NjcwMWE2YWY0NjFlZTVlNjne8XT0: 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.783 15:33:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.350 nvme0n1 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWNlZTMyZjc0MGU1MWI3OTU4NzhjOTFkYWIxZjBiMzFiYzkyZDY1OGFhNzM2MjAyGACbjA==: 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2IzMTc4MDJmNzkyMGY3ZGQwNGUxZmNkOTVjZTZjOTj8C3Du: 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:34.350 15:33:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.350 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.918 nvme0n1 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODM3MmMyMGVjMzVhZmQ1MDI4MTMzMmZjOTM2NTExYWVmOWFlOTI4MTc1Y2M5Njc3N2NjZjcyMDIwOTViYzU3YTMs+MM=: 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:34.918 15:33:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.486 nvme0n1 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.486 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjJmNzdiZWE2Zjk1MzNkNDk4MDBlNTZhNTkxMjNiN2Q5OTU0NjU2MmQxM2RmZWE0LFoJ3g==: 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: ]] 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjU4ZmQ2MDNlZjYwNjNjMjc5ZjhlMzQzMGUzMWQ4OGVhZjkyNjRjZjMzZjA0MDYwiYd32w==: 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.746 
15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.746 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.746 request: 00:28:35.746 { 00:28:35.746 "name": "nvme0", 00:28:35.746 "trtype": "tcp", 00:28:35.746 "traddr": "10.0.0.1", 00:28:35.746 "adrfam": "ipv4", 00:28:35.746 "trsvcid": "4420", 00:28:35.747 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:35.747 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:35.747 "prchk_reftag": false, 00:28:35.747 "prchk_guard": false, 00:28:35.747 "hdgst": false, 00:28:35.747 "ddgst": false, 00:28:35.747 "method": "bdev_nvme_attach_controller", 00:28:35.747 "req_id": 1 00:28:35.747 } 00:28:35.747 Got JSON-RPC error response 00:28:35.747 response: 00:28:35.747 { 00:28:35.747 "code": -5, 00:28:35.747 "message": "Input/output error" 00:28:35.747 } 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.747 request: 00:28:35.747 { 00:28:35.747 "name": "nvme0", 00:28:35.747 "trtype": "tcp", 00:28:35.747 "traddr": "10.0.0.1", 00:28:35.747 "adrfam": "ipv4", 00:28:35.747 "trsvcid": "4420", 00:28:35.747 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:35.747 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:35.747 "prchk_reftag": false, 00:28:35.747 "prchk_guard": false, 00:28:35.747 "hdgst": false, 00:28:35.747 "ddgst": false, 00:28:35.747 "dhchap_key": "key2", 00:28:35.747 "method": "bdev_nvme_attach_controller", 00:28:35.747 "req_id": 1 00:28:35.747 } 00:28:35.747 Got JSON-RPC error response 00:28:35.747 response: 00:28:35.747 { 00:28:35.747 "code": -5, 00:28:35.747 "message": "Input/output error" 00:28:35.747 } 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:35.747 15:33:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.747 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.006 request: 00:28:36.006 { 00:28:36.007 "name": "nvme0", 00:28:36.007 "trtype": "tcp", 00:28:36.007 "traddr": "10.0.0.1", 00:28:36.007 "adrfam": "ipv4", 
00:28:36.007 "trsvcid": "4420", 00:28:36.007 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:36.007 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:36.007 "prchk_reftag": false, 00:28:36.007 "prchk_guard": false, 00:28:36.007 "hdgst": false, 00:28:36.007 "ddgst": false, 00:28:36.007 "dhchap_key": "key1", 00:28:36.007 "dhchap_ctrlr_key": "ckey2", 00:28:36.007 "method": "bdev_nvme_attach_controller", 00:28:36.007 "req_id": 1 00:28:36.007 } 00:28:36.007 Got JSON-RPC error response 00:28:36.007 response: 00:28:36.007 { 00:28:36.007 "code": -5, 00:28:36.007 "message": "Input/output error" 00:28:36.007 } 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:36.007 rmmod nvme_tcp 00:28:36.007 rmmod nvme_fabrics 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3196173 ']' 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3196173 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3196173 ']' 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3196173 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3196173 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3196173' 00:28:36.007 killing process with pid 3196173 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3196173 00:28:36.007 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3196173 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.266 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:38.171 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:38.429 15:33:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:41.710 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:41.710 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:43.087 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:28:43.346 15:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.1Wh /tmp/spdk.key-null.Oqa /tmp/spdk.key-sha256.FWN /tmp/spdk.key-sha384.LPz /tmp/spdk.key-sha512.wn2 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:43.346 15:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:46.637 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:46.637 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:46.637 00:28:46.637 real 0m51.918s 00:28:46.637 user 0m43.819s 00:28:46.637 sys 0m14.762s 00:28:46.637 15:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:46.637 15:33:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.637 ************************************ 00:28:46.637 END TEST nvmf_auth_host 00:28:46.637 ************************************ 00:28:46.637 15:33:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:46.637 15:33:50 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:46.637 15:33:50 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:46.637 15:33:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:46.637 15:33:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:46.637 15:33:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:46.637 ************************************ 00:28:46.637 START TEST nvmf_digest 00:28:46.637 ************************************ 00:28:46.637 15:33:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:46.637 * Looking for test storage... 
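The nvmf_auth_host run that wraps up above exercises SPDK's DH-HMAC-CHAP support entirely over JSON-RPC: for each digest/DH-group pair, the test loads the key into the kernel target (nvmet_auth_set_key), restricts the host side to that single combination with bdev_nvme_set_options, attaches with bdev_nvme_attach_controller, and checks via bdev_nvme_get_controllers that the controller came up as nvme0 before detaching it. The negative cases then confirm that a missing or mismatched --dhchap-key produces the JSON-RPC "Input/output error" seen in the request/response dumps. A minimal sketch of one positive iteration, assuming scripts/rpc.py is the transport behind the log's rpc_cmd wrapper and that key1/ckey1 were registered earlier in the test (key registration is not part of this excerpt):

# One connect_authenticate iteration, reusing the NQNs and the 10.0.0.1:4420 listener from the run above
rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
rpc.py bdev_nvme_detach_controller nvme0

The teardown captured just above is the kernel-side inverse of the setup: the allowed_hosts link, host NQN directory, namespace, port, and subsystem directories under /sys/kernel/config/nvmet are removed in reverse order of creation before nvmet_tcp and nvmet are unloaded.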
00:28:46.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.638 15:33:50 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.638 15:33:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:53.238 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:53.238 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:53.238 Found net devices under 0000:af:00.0: cvl_0_0 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:53.238 Found net devices under 0000:af:00.1: cvl_0_1 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.238 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:53.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:28:53.497 00:28:53.497 --- 10.0.0.2 ping statistics --- 00:28:53.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.497 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:28:53.497 00:28:53.497 --- 10.0.0.1 ping statistics --- 00:28:53.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.497 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:53.497 15:33:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.757 ************************************ 00:28:53.757 START TEST nvmf_digest_clean 00:28:53.757 ************************************ 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3209855 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3209855 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3209855 ']' 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.757 
15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:53.757 15:33:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.757 [2024-07-15 15:33:57.503467] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:28:53.757 [2024-07-15 15:33:57.503514] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.757 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.757 [2024-07-15 15:33:57.577913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.757 [2024-07-15 15:33:57.650084] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.757 [2024-07-15 15:33:57.650120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.757 [2024-07-15 15:33:57.650129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.757 [2024-07-15 15:33:57.650137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.757 [2024-07-15 15:33:57.650144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
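The nvmf_tcp_init sequence traced above builds the whole TCP test topology on a single host by splitting the two detected E810 ports across network namespaces: cvl_0_0 moves into a namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of that sequence, using the interface names discovered above (an illustration of the trace, not a verbatim script):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  # The nvmf target then runs inside that namespace, held at --wait-for-rpc:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc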
00:28:53.757 [2024-07-15 15:33:57.650164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.694 null0 00:28:54.694 [2024-07-15 15:33:58.429752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.694 [2024-07-15 15:33:58.453936] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3210011 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3210011 /var/tmp/bperf.sock 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3210011 ']' 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:54.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:54.694 15:33:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.694 [2024-07-15 15:33:58.509040] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:28:54.694 [2024-07-15 15:33:58.509087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210011 ] 00:28:54.694 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.694 [2024-07-15 15:33:58.579894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.952 [2024-07-15 15:33:58.655208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.520 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:55.520 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:55.520 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:55.520 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:55.520 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:55.779 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.779 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:56.038 nvme0n1 00:28:56.038 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:56.038 15:33:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:56.038 Running I/O for 2 seconds... 
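Each run_bperf invocation in this log follows the same three-step handshake against the bdevperf RPC socket before the timed run starts; condensed from the trace above (repo-relative script paths shortened for readability):

  BPERF=/var/tmp/bperf.sock
  # 1. bdevperf was launched with --wait-for-rpc, so finish its init first:
  scripts/rpc.py -s $BPERF framework_start_init
  # 2. Attach the target subsystem with data digest enabled (--ddgst),
  #    putting a CRC-32C on every data PDU of the TCP connection:
  scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 3. Trigger the workload configured on the bdevperf command line:
  examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests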
00:28:58.574 00:28:58.574 Latency(us) 00:28:58.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.574 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:58.574 nvme0n1 : 2.00 28348.66 110.74 0.00 0.00 4510.54 2057.83 12582.91 00:28:58.574 =================================================================================================================== 00:28:58.574 Total : 28348.66 110.74 0.00 0.00 4510.54 2057.83 12582.91 00:28:58.574 0 00:28:58.574 15:34:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:58.574 15:34:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:58.574 15:34:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:58.574 15:34:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:58.574 | select(.opcode=="crc32c") 00:28:58.574 | "\(.module_name) \(.executed)"' 00:28:58.574 15:34:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3210011 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3210011 ']' 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3210011 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3210011 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3210011' 00:28:58.574 killing process with pid 3210011 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3210011 00:28:58.574 Received shutdown signal, test time was about 2.000000 seconds 00:28:58.574 00:28:58.574 Latency(us) 00:28:58.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.574 =================================================================================================================== 00:28:58.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3210011 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:58.574 15:34:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3210680 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3210680 /var/tmp/bperf.sock 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3210680 ']' 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.574 15:34:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:58.574 [2024-07-15 15:34:02.380352] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:28:58.574 [2024-07-15 15:34:02.380407] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210680 ] 00:28:58.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:58.574 Zero copy mechanism will not be used. 
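After each timed run, pass/fail hinges on which accel module actually executed the CRC-32C operations; with scan_dsa=false the expected module is software, as checked at host/digest.sh@93-96 above. The query is a plain RPC plus the jq filter seen in the trace:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"'
  # expected shape of the output: "software <count>", with count > 0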
00:28:58.574 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.574 [2024-07-15 15:34:02.449421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.833 [2024-07-15 15:34:02.524849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.398 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.398 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:59.398 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:59.398 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:59.398 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:59.657 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.657 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.915 nvme0n1 00:28:59.915 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:59.915 15:34:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:59.915 Zero copy mechanism will not be used. 00:28:59.915 Running I/O for 2 seconds... 
00:29:02.443 00:29:02.443 Latency(us) 00:29:02.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.443 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:02.444 nvme0n1 : 2.00 3860.19 482.52 0.00 0.00 4141.75 996.15 13736.35 00:29:02.444 =================================================================================================================== 00:29:02.444 Total : 3860.19 482.52 0.00 0.00 4141.75 996.15 13736.35 00:29:02.444 0 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:02.444 | select(.opcode=="crc32c") 00:29:02.444 | "\(.module_name) \(.executed)"' 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3210680 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3210680 ']' 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3210680 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3210680 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3210680' 00:29:02.444 killing process with pid 3210680 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3210680 00:29:02.444 Received shutdown signal, test time was about 2.000000 seconds 00:29:02.444 00:29:02.444 Latency(us) 00:29:02.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.444 =================================================================================================================== 00:29:02.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:02.444 15:34:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3210680 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:02.444 15:34:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3211227 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3211227 /var/tmp/bperf.sock 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3211227 ']' 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:02.444 15:34:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:02.444 [2024-07-15 15:34:06.222916] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:29:02.444 [2024-07-15 15:34:06.222969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211227 ] 00:29:02.444 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.444 [2024-07-15 15:34:06.293368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.702 [2024-07-15 15:34:06.360190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.268 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:03.268 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:03.268 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:03.268 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:03.268 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:03.527 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.527 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.784 nvme0n1 00:29:03.784 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:03.784 15:34:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.784 Running I/O for 2 seconds... 
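The MiB/s column in these result tables is just IOPS times the fixed I/O size (MiB/s = IOPS x io_size / 2^20), which allows a quick consistency check; for the 4 KiB randread table earlier in the trace:

  echo 'scale=2; 28348.66 * 4096 / 1048576' | bc
  # -> 110.73, matching the reported 110.74 to within rounding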
00:29:05.686 00:29:05.686 Latency(us) 00:29:05.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.686 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:05.686 nvme0n1 : 2.00 28158.70 109.99 0.00 0.00 4538.11 3853.52 13841.20 00:29:05.686 =================================================================================================================== 00:29:05.686 Total : 28158.70 109.99 0.00 0.00 4538.11 3853.52 13841.20 00:29:05.686 0 00:29:05.686 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:05.686 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:05.686 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:05.686 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:05.686 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:05.686 | select(.opcode=="crc32c") 00:29:05.686 | "\(.module_name) \(.executed)"' 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3211227 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3211227 ']' 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3211227 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3211227 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3211227' 00:29:05.945 killing process with pid 3211227 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3211227 00:29:05.945 Received shutdown signal, test time was about 2.000000 seconds 00:29:05.945 00:29:05.945 Latency(us) 00:29:05.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.945 =================================================================================================================== 00:29:05.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.945 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3211227 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:06.203 15:34:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3212008 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3212008 /var/tmp/bperf.sock 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3212008 ']' 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:06.203 15:34:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:06.203 [2024-07-15 15:34:10.032022] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:06.203 [2024-07-15 15:34:10.032080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212008 ] 00:29:06.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:06.203 Zero copy mechanism will not be used. 
00:29:06.203 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.203 [2024-07-15 15:34:10.104213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.461 [2024-07-15 15:34:10.179797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.028 15:34:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.028 15:34:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:07.028 15:34:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:07.028 15:34:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:07.028 15:34:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:07.287 15:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.287 15:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.546 nvme0n1 00:29:07.546 15:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:07.546 15:34:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.546 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.546 Zero copy mechanism will not be used. 00:29:07.546 Running I/O for 2 seconds... 
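Taken together, host/digest.sh@128-131 in this trace sweep a 2x2 matrix of digest workloads, all with DSA offload disabled; the four calls are equivalent to:

  # run_bperf <rw> <io_size_bytes> <queue_depth> <scan_dsa>
  for rw in randread randwrite; do
      # 4 KiB I/O at queue depth 128:
      run_bperf "$rw" 4096 128 false
      # 128 KiB I/O at queue depth 16 (above the 64 KiB zero-copy threshold,
      # hence the "Zero copy mechanism will not be used" notices):
      run_bperf "$rw" 131072 16 false
  done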
00:29:10.084 00:29:10.084 Latency(us) 00:29:10.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.084 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:10.084 nvme0n1 : 2.00 4102.81 512.85 0.00 0.00 3894.55 2267.55 16252.93 00:29:10.084 =================================================================================================================== 00:29:10.084 Total : 4102.81 512.85 0.00 0.00 3894.55 2267.55 16252.93 00:29:10.084 0 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:10.084 | select(.opcode=="crc32c") 00:29:10.084 | "\(.module_name) \(.executed)"' 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3212008 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3212008 ']' 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3212008 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3212008 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3212008' 00:29:10.084 killing process with pid 3212008 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3212008 00:29:10.084 Received shutdown signal, test time was about 2.000000 seconds 00:29:10.084 00:29:10.084 Latency(us) 00:29:10.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.084 =================================================================================================================== 00:29:10.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3212008 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3209855 00:29:10.084 15:34:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3209855 ']' 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3209855 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3209855 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3209855' 00:29:10.084 killing process with pid 3209855 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3209855 00:29:10.084 15:34:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3209855 00:29:10.343 00:29:10.343 real 0m16.615s 00:29:10.343 user 0m31.229s 00:29:10.343 sys 0m4.987s 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:10.343 ************************************ 00:29:10.343 END TEST nvmf_digest_clean 00:29:10.343 ************************************ 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.343 ************************************ 00:29:10.343 START TEST nvmf_digest_error 00:29:10.343 ************************************ 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3212593 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3212593 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3212593 ']' 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.343 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:10.343 [2024-07-15 15:34:14.198969] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:10.343 [2024-07-15 15:34:14.199017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.343 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.603 [2024-07-15 15:34:14.274028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.603 [2024-07-15 15:34:14.346962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.603 [2024-07-15 15:34:14.346999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.603 [2024-07-15 15:34:14.347008] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.603 [2024-07-15 15:34:14.347017] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.603 [2024-07-15 15:34:14.347024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
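Where nvmf_digest_clean validated digests on the happy path, the nvmf_digest_error test starting here routes the crc32c opcode through the accel error-injection module so digests can be corrupted on demand. The RPCs visible in the following trace lines amount to the sketch below; the interpretation of the -t/-i flags is inferred from their names and from the COMMAND TRANSIENT TRANSPORT ERROR completions that follow:

  # Target side, set while nvmf_tgt is still held at --wait-for-rpc:
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # bperf side: keep NVMe error statistics and retry failed I/O indefinitely:
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # Injection starts disabled for the initial attach, then corrupts 256
  # crc32c operations, which the initiator surfaces as data digest errors:
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256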
00:29:10.603 [2024-07-15 15:34:14.347047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:11.172 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:11.172 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:11.172 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:11.172 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:11.172 15:34:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.172 [2024-07-15 15:34:15.045101] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:11.172 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.432 null0
00:29:11.432 [2024-07-15 15:34:15.138845] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:11.432 [2024-07-15 15:34:15.163055] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3212871
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3212871 /var/tmp/bperf.sock
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3212871 ']'
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:11.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:11.432 15:34:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:11.432 [2024-07-15 15:34:15.216987] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:29:11.432 [2024-07-15 15:34:15.217033] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3212871 ]
00:29:11.432 EAL: No free 2048 kB hugepages reported on node 1
00:29:11.432 [2024-07-15 15:34:15.286438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:11.691 [2024-07-15 15:34:15.361306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:12.260 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:12.260 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:12.260 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:12.260 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:12.520 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:12.520 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:12.520 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:12.520 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:12.520 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:12.520 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:12.852 nvme0n1
00:29:12.852 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:12.852 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:12.852 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:12.852 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:12.852 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:12.852 15:34:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.852 Running I/O for 2 seconds... 00:29:12.852 [2024-07-15 15:34:16.708723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:12.852 [2024-07-15 15:34:16.708759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-07-15 15:34:16.708772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.853 [2024-07-15 15:34:16.718955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:12.853 [2024-07-15 15:34:16.718981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-07-15 15:34:16.718992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.853 [2024-07-15 15:34:16.727768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:12.853 [2024-07-15 15:34:16.727791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-07-15 15:34:16.727802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.853 [2024-07-15 15:34:16.736491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:12.853 [2024-07-15 15:34:16.736514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.853 [2024-07-15 15:34:16.736524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.746124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.746147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.746157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.755909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.755931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.755942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.764608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.764631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5795 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.764641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.773840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.773861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.773872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.783355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.783376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.783386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.791535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.791557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.791572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.801968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.801991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.802001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.809726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.809747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.809758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.819055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.819077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.819087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.828865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.828887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:17 nsid:1 lba:12013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.828897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.836022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.836044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.836054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.845549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.845571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.845581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.855624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.855646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.855656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.864389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.864410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.864421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.873845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.873870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.873881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.882020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.882041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.882052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.890906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.890927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.890937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.899234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.899255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.899265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.909136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.909158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.909168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.918394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.918416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.115 [2024-07-15 15:34:16.918426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.115 [2024-07-15 15:34:16.926612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.115 [2024-07-15 15:34:16.926634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.926645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.935804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:16.935826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.935840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.945360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:16.945383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.945393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.953406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 
00:29:13.116 [2024-07-15 15:34:16.953428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.953439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.963241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:16.963263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.963274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.972153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:16.972176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.972187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.981980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:16.982001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.982012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.989584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:16.989606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.989616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:16.999374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:16.999396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:16.999406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:17.008890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:17.008912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:17.008922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.116 [2024-07-15 15:34:17.017880] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.116 [2024-07-15 15:34:17.017901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.116 [2024-07-15 15:34:17.017912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.026009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.026031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.026045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.035824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.035850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.035861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.044954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.044976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.044986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.053952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.053973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.053984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.062860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.062881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.062892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.072295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.072317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.072328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:13.376 [2024-07-15 15:34:17.079811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.079836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.079847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.089765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.089787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.089797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.099021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.099043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.099053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.107373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.107397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.107408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.117064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.117087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.117097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.125017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.125039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.125050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.134420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.134441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.134451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.142752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.142774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.142784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.152006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.152027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.152038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.161173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.161195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.170034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.170055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.170066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.179324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.179346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.179360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.187451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.187472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.187482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.196892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.196913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.196923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.205880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.205901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.205911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.214671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.214692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.214703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.224073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.224095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.224106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.232968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.232989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.233000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.241929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.241951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.241961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.250974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.250996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.376 [2024-07-15 15:34:17.251007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.260160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.260185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:13.376 [2024-07-15 15:34:17.260195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.376 [2024-07-15 15:34:17.268953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.376 [2024-07-15 15:34:17.268975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.377 [2024-07-15 15:34:17.268985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.377 [2024-07-15 15:34:17.278109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.377 [2024-07-15 15:34:17.278130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.377 [2024-07-15 15:34:17.278141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.636 [2024-07-15 15:34:17.286532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.286553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.286564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.296466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.296487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.296497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.305567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.305588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.305598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.313479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.313501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.313511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.324164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.324185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.324195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.331494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.331516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.331527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.340796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.340818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.340829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.350470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.350493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.350504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.358472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.358494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.358504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.368127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.368149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.368160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.377608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.377629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.377640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.385569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.385590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.385600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.394605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.394626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.394637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.403648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.403669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.403679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.412866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.412887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.412901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.421326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.421347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.421358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.429692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.429713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.429723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.439112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.439133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.439144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.448276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 
00:29:13.637 [2024-07-15 15:34:17.448298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.448308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.456072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.456094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.456104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.465479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.465501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.465512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.475265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.475288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.475298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.483637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.483659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.483670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.493947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.493973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.493983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.502049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.502071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.502081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.511685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.511707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.511718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.520949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.520972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.520983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.529499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.529521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.529532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.637 [2024-07-15 15:34:17.538613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.637 [2024-07-15 15:34:17.538636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.637 [2024-07-15 15:34:17.538647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.898 [2024-07-15 15:34:17.547967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.898 [2024-07-15 15:34:17.547990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.898 [2024-07-15 15:34:17.548001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.898 [2024-07-15 15:34:17.557048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.898 [2024-07-15 15:34:17.557070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.898 [2024-07-15 15:34:17.557080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.898 [2024-07-15 15:34:17.565014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270) 00:29:13.898 [2024-07-15 15:34:17.565036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.898 [2024-07-15 15:34:17.565046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.898 [2024-07-15 15:34:17.574936] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270)
00:29:13.898 [2024-07-15 15:34:17.574958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.898 [2024-07-15 15:34:17.574969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[~115 further repetitions of this three-line pattern elided: the nvme_tcp.c:1459 data digest error on tqpair=(0x1aee270), the nvme_qpair.c:243 READ command print, and the nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; timestamps 2024-07-15 15:34:17.583 through 15:34:18.676, qid:1, varying cid/lba]
dnr:0
00:29:14.943 [2024-07-15 15:34:18.684612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270)
00:29:14.943 [2024-07-15 15:34:18.684634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.943 [2024-07-15 15:34:18.684644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.943 [2024-07-15 15:34:18.694151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aee270)
00:29:14.943 [2024-07-15 15:34:18.694172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.943 [2024-07-15 15:34:18.694182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.943
00:29:14.943                                      Latency(us)
00:29:14.943 Device Information                 : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:14.943 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:14.943 nvme0n1                            :       2.00   28280.33     110.47       0.00       0.00    4520.12    2254.44   15938.36
00:29:14.943 ===================================================================================================================
00:29:14.943 Total                              :              28280.33     110.47       0.00       0.00    4520.12    2254.44   15938.36
00:29:14.943 0
00:29:14.943 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:14.943 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:14.943 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:14.943 | .driver_specific
00:29:14.943 | .nvme_error
00:29:14.943 | .status_code
00:29:14.943 | .command_transient_transport_error'
00:29:14.943 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:15.202 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 ))
00:29:15.202 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3212871
00:29:15.202 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3212871 ']'
00:29:15.202 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3212871
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3212871
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3212871'
00:29:15.203 killing process with pid 3212871
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3212871
00:29:15.203 Received shutdown signal, test time was about 2.000000 seconds
00:29:15.203
00:29:15.203                                      Latency(us)
00:29:15.203 Device Information                 : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:15.203 ===================================================================================================================
00:29:15.203 Total                              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:29:15.203 15:34:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3212871
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3213420
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3213420 /var/tmp/bperf.sock
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3213420 ']'
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:15.462 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:15.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:15.462 [2024-07-15 15:34:19.180933] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:29:15.462 [2024-07-15 15:34:19.180984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3213420 ]
00:29:15.462 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:15.462 Zero copy mechanism will not be used.
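
The get_transient_errcount check traced above (host/digest.sh@71, the (( 222 > 0 )) test) works because the harness sets bdev_nvme_set_options --nvme-error-stat before attaching, so bdev_get_iostat reports per-status-code NVMe error counters for the bdev. A minimal stand-alone sketch of the same extraction, assuming an SPDK checkout at $SPDK_DIR (illustrative name) and a bdevperf instance already listening on /var/tmp/bperf.sock:

  # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # for nvme0n1; the injected data digest errors all land in this bucket.
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "digest errors were counted as transient transport errors: $errcount"
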
00:29:15.462 EAL: No free 2048 kB hugepages reported on node 1
00:29:15.462 [2024-07-15 15:34:19.252163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:15.462 [2024-07-15 15:34:19.318797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:16.399 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:16.399 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:16.399 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:16.399 15:34:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:16.399 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:16.399 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:16.399 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:16.399 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:16.399 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:16.399 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:16.658 nvme0n1
00:29:16.658 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:16.658 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:16.658 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:16.658 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:16.658 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:16.658 15:34:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:16.918 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:16.918 Zero copy mechanism will not be used.
00:29:16.918 Running I/O for 2 seconds...
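
The trace above is the heart of the digest-error scenario: bdev_nvme_attach_controller --ddgst enables the NVMe/TCP data digest (a CRC32C over each data PDU), and accel_error_inject_error -o crc32c -t corrupt arms the accel error module to corrupt crc32c results, so receive-side digest verification fails and each affected READ completes as a transient transport error. A condensed sketch of the same RPC sequence, assuming $SPDK_DIR is an SPDK checkout (illustrative) and the target set up earlier in the run still exports nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420:

  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep error-status counters; retry forever
  $rpc accel_error_inject_error -o crc32c -t disable                   # start from a clean injection state
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # --ddgst turns data digest on
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results (-i 32 as traced)
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Every corrupted digest shows up below as a data digest error on the qpair, followed by the READ it failed and a TRANSIENT TRANSPORT ERROR (00/22) completion; the len:32 blocks per command matches the 131072-byte I/O size at a 4096-byte block size.
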
00:29:16.918 [2024-07-15 15:34:20.655348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.655385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.655398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.666934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.666964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.666976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.676240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.676266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.676278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.684481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.684505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.684520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.691838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.691860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.691871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.698645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.698668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.698679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.705367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.705390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.705400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.712056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.712078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.918 [2024-07-15 15:34:20.712088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.918 [2024-07-15 15:34:20.718703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.918 [2024-07-15 15:34:20.718727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.718738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.725468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.725490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.725502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.732241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.732264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.732275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.738869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.738891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.738904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.745591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.745617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.745628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.752315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.752337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.752349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.759023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.759044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.759055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.765677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.765698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.765709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.772389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.772410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.772420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.779110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.779132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.779143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.785800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.785821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.785837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.792391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.792413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.792424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.799029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.799051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:16.919 [2024-07-15 15:34:20.799062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.805663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.805686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.805696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.812368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.812391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.812401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.919 [2024-07-15 15:34:20.819060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:16.919 [2024-07-15 15:34:20.819081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.919 [2024-07-15 15:34:20.819092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.179 [2024-07-15 15:34:20.825860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.179 [2024-07-15 15:34:20.825883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.179 [2024-07-15 15:34:20.825894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.179 [2024-07-15 15:34:20.832533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.179 [2024-07-15 15:34:20.832556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.179 [2024-07-15 15:34:20.832566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.179 [2024-07-15 15:34:20.839159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.839181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.839192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.845798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.845820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.845831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.852499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.852521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.852531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.859168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.859190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.859204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.865864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.865886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.865897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.872500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.872522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.872534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.879137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.879159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.879170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.885797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.885819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.885829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.892445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.892467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.892477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.899092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.899114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.899125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.905755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.905777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.905788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.912606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.912628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.912639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.919305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.919328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.919338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.925950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.925972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.925983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.932574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.932596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.932607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.939199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 
00:29:17.180 [2024-07-15 15:34:20.939222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.939232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.945817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.945844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.945854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.952446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.952467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.952477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.959077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.959100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.959110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.967503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.967527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.967538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.976762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.976786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.976800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.985924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.985947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.985958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:20.995164] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:20.995188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:20.995199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:21.004295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:21.004318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:21.004329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:21.013342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:21.013367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:21.013378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:21.022285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:21.022309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:21.022319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:21.032356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:21.032379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:21.032391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:21.041053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:21.041077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:21.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.180 [2024-07-15 15:34:21.050807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.180 [2024-07-15 15:34:21.050831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.180 [2024-07-15 15:34:21.050849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:29:17.181 [2024-07-15 15:34:21.060849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.181 [2024-07-15 15:34:21.060877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.181 [2024-07-15 15:34:21.060888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.181 [2024-07-15 15:34:21.071707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.181 [2024-07-15 15:34:21.071731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.181 [2024-07-15 15:34:21.071742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.181 [2024-07-15 15:34:21.081486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.181 [2024-07-15 15:34:21.081511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.181 [2024-07-15 15:34:21.081522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.441 [2024-07-15 15:34:21.092138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.441 [2024-07-15 15:34:21.092162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.441 [2024-07-15 15:34:21.092173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.441 [2024-07-15 15:34:21.102383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.441 [2024-07-15 15:34:21.102406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.441 [2024-07-15 15:34:21.102417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.441 [2024-07-15 15:34:21.111474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.441 [2024-07-15 15:34:21.111497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.441 [2024-07-15 15:34:21.111508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.120181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.120204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.120215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.128498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.128521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.128531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.135753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.135776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.135787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.142631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.142653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.142663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.149438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.149460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.149471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.157596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.157619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.157630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.166812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.166842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.166854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.176591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.176615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.176626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.185794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.185817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.185828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.193917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.193939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.193950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.201262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.201285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.201295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.207967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.207991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.208005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.214771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.214795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.214806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.221520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.221543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.221553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.228297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.228319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:17.442 [2024-07-15 15:34:21.228329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.234994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.235016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.235027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.241774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.241797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.241808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.250004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.250028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.250038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.258844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.258869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.258880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.267672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.267695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.267706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.274733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.274756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.274767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.281603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.281626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.281636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.289029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.289051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.289061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.296492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.296514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.296524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.304044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.304066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.304076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.310678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.310700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.310710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.317565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.317587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.317598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.324623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.324645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.442 [2024-07-15 15:34:21.324655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.442 [2024-07-15 15:34:21.331565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.442 [2024-07-15 15:34:21.331587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.443 [2024-07-15 15:34:21.331600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.443 [2024-07-15 15:34:21.338810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.443 [2024-07-15 15:34:21.338836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.443 [2024-07-15 15:34:21.338847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.443 [2024-07-15 15:34:21.345901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.443 [2024-07-15 15:34:21.345924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.443 [2024-07-15 15:34:21.345935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.352772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.352802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.352813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.359791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.359813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.359823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.366882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.366904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.366914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.373747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.373768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.373779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.380743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 
00:29:17.703 [2024-07-15 15:34:21.380764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.380774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.387964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.387986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.387997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.394756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.394785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.394795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.401976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.401998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.402009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.415677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.415698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.415708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.426181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.426203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.426213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.435805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.435827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.435843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.443650] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.443672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.443682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.456357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.456380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.456391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.467106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.467128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.467139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.476830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.476858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.476868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.484799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.484821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.484841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.492143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.492165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.492176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.499343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.499365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.499376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.506375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.506397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.506407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.513356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.513377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.513388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.526676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.526698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.526708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.536924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.536946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.536956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.546231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.546255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.546266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.553744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.553766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.553779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.560780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.560802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.560812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.567582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.567604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.567615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.703 [2024-07-15 15:34:21.574238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.703 [2024-07-15 15:34:21.574260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.703 [2024-07-15 15:34:21.574271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.704 [2024-07-15 15:34:21.580952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.704 [2024-07-15 15:34:21.580974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.704 [2024-07-15 15:34:21.580985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.704 [2024-07-15 15:34:21.587348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.704 [2024-07-15 15:34:21.587373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.704 [2024-07-15 15:34:21.587384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.704 [2024-07-15 15:34:21.596183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.704 [2024-07-15 15:34:21.596208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.704 [2024-07-15 15:34:21.596219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.704 [2024-07-15 15:34:21.605000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.704 [2024-07-15 15:34:21.605026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.704 [2024-07-15 15:34:21.605037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.963 [2024-07-15 15:34:21.613589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.963 [2024-07-15 15:34:21.613614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.963 [2024-07-15 15:34:21.613625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.963 [2024-07-15 15:34:21.622055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.963 [2024-07-15 15:34:21.622080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.963 [2024-07-15 15:34:21.622091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.963 [2024-07-15 15:34:21.632850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.963 [2024-07-15 15:34:21.632874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.963 [2024-07-15 15:34:21.632885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.963 [2024-07-15 15:34:21.641972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.963 [2024-07-15 15:34:21.641997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.642008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.654386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.654410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.654421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.667842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.667866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.667876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.678163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.678188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.678199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.686575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.686599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:17.964 [2024-07-15 15:34:21.686609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.699997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.700021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.700031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.712349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.712373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.712386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.724798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.724823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.724838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.735075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.735097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.735107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.743656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.743680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.743690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.751628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.751651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.751661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.759683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.759706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.759716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.766844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.766867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.766877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.780779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.780803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.780813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.791819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.791849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.791860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.801582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.801608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.801619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.809893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.809916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.809926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.817877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.817901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.817911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.826274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.826298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.826308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.835975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.835998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.836008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.845948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.845972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.845983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.854729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.854754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.854764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.964 [2024-07-15 15:34:21.868129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:17.964 [2024-07-15 15:34:21.868154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.964 [2024-07-15 15:34:21.868165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.880971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.880995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.881005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.891550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.891575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.891585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.901573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 
[2024-07-15 15:34:21.901597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.901607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.910544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.910567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.910577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.920091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.920114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.920125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.928090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.928114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.928125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.937850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.937875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.937886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.947480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.947504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.947515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.223 [2024-07-15 15:34:21.956451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.223 [2024-07-15 15:34:21.956474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.223 [2024-07-15 15:34:21.956486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:21.965912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:21.965937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:21.965951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:21.974926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:21.974950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:21.974961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:21.984296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:21.984321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:21.984332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:21.994119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:21.994144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:21.994156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.004354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.004379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.004390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.014027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.014052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.014062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.024069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.024094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.024105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.034004] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.034028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.034038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.044205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.044230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.044241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.054326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.054354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.054365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.062961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.062986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.062997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.072711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.072738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.072749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.083079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.083104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.083115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.093121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.093145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.093156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
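[The repeating triplet above is SPDK's NVMe/TCP receive path failing data digest verification: nvme_tcp.c:1459 (nvme_tcp_accel_seq_recv_compute_crc32_done) reports a data digest error on the qpair, nvme_qpair.c prints the affected READ command, and the completion is surfaced as TRANSIENT TRANSPORT ERROR, where "(00/22)" is status code type 00h / status code 22h and dnr:0 means Do Not Retry is clear, i.e. the host may retry the I/O. For reference, the sketch below is an illustrative, self-contained recomputation of an NVMe/TCP data digest, assuming (per the NVMe/TCP transport spec) that the DDGST field is a CRC-32C over the PDU's data bytes; it is not SPDK's actual implementation, which offloads this to an accel sequence as the function name suggests.

/*
 * Illustrative sketch only -- not SPDK source. A "data digest error" like
 * the ones logged above corresponds to this recomputed CRC-32C disagreeing
 * with the DDGST value received on the wire.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise reflected CRC-32C (Castagnoli); reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            /* (0u - (crc & 1u)) is an all-ones mask when the LSB is set. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Stand-in for a received PDU data payload. */
    const uint8_t payload[] = "123456789";
    uint32_t ddgst = crc32c(payload, strlen((const char *)payload));

    /* Standard CRC-32C check value for "123456789" is 0xE3069283;
     * a mismatch against the wire DDGST would be a data digest error. */
    printf("computed DDGST: 0x%08X\n", ddgst);
    return 0;
}

Compiled with any C99 compiler, the program prints 0xE3069283, the published CRC-32C check value, which is a quick way to validate a digest implementation against the algorithm the transport expects.]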
00:29:18.224 [2024-07-15 15:34:22.102968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.102993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.103005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.115604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.115630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.115641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.224 [2024-07-15 15:34:22.126087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.224 [2024-07-15 15:34:22.126111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.224 [2024-07-15 15:34:22.126122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.135976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.135999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.136010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.144928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.144952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.144963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.154252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.154277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.154289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.164819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.164848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.164859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.174773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.174796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.174806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.183381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.183405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.183417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.191102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.191126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.191137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.198308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.198332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.198343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.205336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.484 [2024-07-15 15:34:22.205360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.484 [2024-07-15 15:34:22.205370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.484 [2024-07-15 15:34:22.212057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.212085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.212096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.218761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.218785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.218797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.227124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.227148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.227159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.236609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.236634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.236645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.245919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.245942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.245954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.255148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.255173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.255184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.264797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.264821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.264837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.274048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.274072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.274083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.283909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.283933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.283944] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.293757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.293782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.293793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.303851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.303877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.303888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.313480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.313506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.313517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.322473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.322498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.322509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.332623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.332649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.332660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.343767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.343793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.343803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.353647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.353673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:18.485 [2024-07-15 15:34:22.353685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.363318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.363343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.363355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.373457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.373482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.373496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.382477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.382501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.382512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.485 [2024-07-15 15:34:22.390804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.485 [2024-07-15 15:34:22.390828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.485 [2024-07-15 15:34:22.390845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.398060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.398084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.398094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.404945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.404968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.404979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.411680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.411704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.411715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.418432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.418456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.418467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.425966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.425989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.426000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.434510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.434535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.434546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.442787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.442814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.442825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.450946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.450970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.450980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.459350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.459373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.459384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.467603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.467627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.467637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.475986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.476010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.476021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.483914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.745 [2024-07-15 15:34:22.483938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.745 [2024-07-15 15:34:22.483948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.745 [2024-07-15 15:34:22.490867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.490890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.490901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.497812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.497841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.497852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.504553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.504577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.504588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.511304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.511328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.511339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.518502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 
[2024-07-15 15:34:22.518527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.518538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.526985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.527008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.527020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.536010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.536034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.536044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.544441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.544466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.544477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.552893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.552917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.552928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.561478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.561504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.561515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.569953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.569976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.569986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.577351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.577375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.577391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.585846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.585869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.585881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.595001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.595026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.595037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.603821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.603853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.603865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.613283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.613309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.613320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.622672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.622697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.622707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.632105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0) 00:29:18.746 [2024-07-15 15:34:22.632131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.746 [2024-07-15 15:34:22.632144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:18.746 [2024-07-15 15:34:22.641194] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x110d9d0)
00:29:18.746 [2024-07-15 15:34:22.641219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.746 [2024-07-15 15:34:22.641231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:18.746
00:29:18.746 Latency(us)
00:29:18.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:18.746 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:18.746 nvme0n1 : 2.00 3646.24 455.78 0.00 0.00 4385.56 1061.68 15204.35
00:29:18.746 ===================================================================================================================
00:29:18.746 Total : 3646.24 455.78 0.00 0.00 4385.56 1061.68 15204.35
00:29:18.746 0
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:19.006 | .driver_specific
00:29:19.006 | .nvme_error
00:29:19.006 | .status_code
00:29:19.006 | .command_transient_transport_error'
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 ))
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3213420
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3213420 ']'
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3213420
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3213420
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3213420'
00:29:19.006 killing process with pid 3213420
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3213420
00:29:19.006 Received shutdown signal, test time was about 2.000000 seconds
00:29:19.006
00:29:19.006 Latency(us)
00:29:19.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:19.006 ===================================================================================================================
00:29:19.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:19.006 15:34:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3213420
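The randread numbers above are internally consistent: at an IO size of 131072 bytes (1/8 MiB), 3646.24 IOPS works out to 3646.24/8 = 455.78 MiB/s, matching the table. The get_transient_errcount trace then reads the per-NVMe error counters back over the bperf RPC socket. A minimal standalone sketch of that check, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and the SPDK checkout path used by this job:

  # Sketch of what host/digest.sh's get_transient_errcount does (not the script itself).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')
  # The subtest passes only if the injected digest errors surfaced as transient
  # transport errors; this run counted 235 of them.
  if (( errcount > 0 )); then
      echo "transient transport errors observed: $errcount"
  fi

Note that the nvme_error block is only present in bdev_get_iostat output because the harness enabled --nvme-error-stat when it configured the bdev layer.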
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3214204
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3214204 /var/tmp/bperf.sock
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3214204 ']'
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:19.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:19.264 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
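The trace above launches bdevperf with -z (hold the configured job until an RPC start signal arrives) on core mask 0x2, which matches the "Reactor started on core 1" notice below, and waitforlisten then polls until the UNIX-domain RPC socket answers. A rough standalone equivalent, assuming the same build tree; the rpc_get_methods poll is an approximation of waitforlisten, not a copy of it:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -w randwrite -o 4096 -q 128 -t 2: 4 KiB random writes, queue depth 128, 2 s run.
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Wait until the RPC socket is serving before issuing any configuration calls.
  until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done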
00:29:19.264 [2024-07-15 15:34:23.129929] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:29:19.264 [2024-07-15 15:34:23.129995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214204 ]
00:29:19.264 EAL: No free 2048 kB hugepages reported on node 1
00:29:19.522 [2024-07-15 15:34:23.199438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:19.522 [2024-07-15 15:34:23.274087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:20.089 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:20.089 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:20.089 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:20.089 15:34:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:20.348 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:20.348 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:20.348 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:20.348 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:20.348 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:20.348 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:20.608 nvme0n1
00:29:20.608 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:20.608 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:20.608 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:20.608 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:20.608 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:20.608 15:34:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:20.608 Running I/O for 2 seconds...
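At this point the randwrite workload is fully wired up: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, the accel crc32c error injection is first cleared and then re-armed with -t corrupt -i 256, and the controller is attached over TCP with --ddgst so every payload carries a data digest; perform_tests then releases the job that -z was holding. Condensed as standalone commands, as a sketch only; where rpc_cmd sends the accel calls is not visible in this excerpt, so the unadorned "$RPC" invocations below assume the default RPC socket of the application under test:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  # Keep NVMe error counters and retry failed I/O indefinitely instead of failing the job.
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any previous accel error injection.
  "$RPC" accel_error_inject_error -o crc32c -t disable
  # Attach the target with data digest enabled; this prints the new bdev name (nvme0n1).
  "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm injection: corrupt crc32c results, with the same flags the harness uses.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the queued 2-second randwrite job.
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the digest deliberately corrupted, the target-side tcp.c data_crc32_calc_done errors and the matching TRANSIENT TRANSPORT ERROR completions that follow are the expected outcome, and they are what the subsequent transient-error count asserts on.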
00:29:20.608 [2024-07-15 15:34:24.456518] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.608 [2024-07-15 15:34:24.457096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.608 [2024-07-15 15:34:24.457126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.608 [2024-07-15 15:34:24.465660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.608 [2024-07-15 15:34:24.465846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.608 [2024-07-15 15:34:24.465869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.608 [2024-07-15 15:34:24.474856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.608 [2024-07-15 15:34:24.475052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.608 [2024-07-15 15:34:24.475074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.608 [2024-07-15 15:34:24.484154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.608 [2024-07-15 15:34:24.484363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.608 [2024-07-15 15:34:24.484384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.608 [2024-07-15 15:34:24.493677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.608 [2024-07-15 15:34:24.493901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.608 [2024-07-15 15:34:24.493923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.608 [2024-07-15 15:34:24.502880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.608 [2024-07-15 15:34:24.503082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.608 [2024-07-15 15:34:24.503102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.608 [2024-07-15 15:34:24.512180] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.608 [2024-07-15 15:34:24.512383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.608 [2024-07-15 15:34:24.512404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:29:20.868 [2024-07-15 15:34:24.521490] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.521689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.521709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.530700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.530915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.530935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.539759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.539971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.539991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.548893] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.549092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.549112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.558057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.558259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.558279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.567201] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.567407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.567427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.576368] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.576574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.576594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c 
p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.585539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.585746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.585766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.594673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.594877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.594897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.603798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.604010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.604031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.613154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.613359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.613380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.622252] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.622455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.622475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.631394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.631596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.631616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.640507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.640711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.640732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.649643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.649849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.649870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.658764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.658974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.658994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.667875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.668080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.668100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.676982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.677188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.677208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.686092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.686296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.686316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.695228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.695434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.695454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.704374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.704581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.704601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.713494] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.713696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.713721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.722716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.722912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.722933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.731826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.732031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.732060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.740903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.741100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.868 [2024-07-15 15:34:24.741120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.868 [2024-07-15 15:34:24.750034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.868 [2024-07-15 15:34:24.750240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.869 [2024-07-15 15:34:24.750260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.869 [2024-07-15 15:34:24.759146] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.869 [2024-07-15 15:34:24.759344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.869 [2024-07-15 15:34:24.759362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:20.869 [2024-07-15 15:34:24.768113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:20.869 [2024-07-15 15:34:24.768309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:20.869 [2024-07-15 15:34:24.768328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.777542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.777740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.777760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.786817] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.787033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.787053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.795956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.796163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.796184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.805157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.805356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.805376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.814325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.814521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.814540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.823464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.823661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.823680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.832736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.832959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.832978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.841839] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.842043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.842064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.850982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.851188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.851209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.860086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.860290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.860310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.869248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.869453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.869473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.878339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.878545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.878564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.887524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.887731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.887751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.896654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.128 [2024-07-15 15:34:24.896861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.128 [2024-07-15 15:34:24.896880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.128 [2024-07-15 15:34:24.905783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.905994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.906014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.914901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.915108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.915127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.924026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.924230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.924249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.933148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.933347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.933366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.942293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.942497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.942517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.951427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.951631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.951654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.960511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.960716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.960736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.969632] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.969846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.969865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.978905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.979093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.979111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.988093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.988279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.988298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:24.997264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:24.997461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:24.997480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:25.006376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:25.006581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:25.006601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:25.015441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:25.015644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:25.015663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:25.024558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:25.024759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:25.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.129 [2024-07-15 15:34:25.033756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.129 [2024-07-15 15:34:25.033969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.129 [2024-07-15 15:34:25.033992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.388 [2024-07-15 15:34:25.043095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.388 [2024-07-15 15:34:25.043301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.388 [2024-07-15 15:34:25.043330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.388 [2024-07-15 15:34:25.052219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.388 [2024-07-15 15:34:25.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.388 [2024-07-15 15:34:25.052437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.388 [2024-07-15 15:34:25.061296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.388 [2024-07-15 15:34:25.061499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.388 [2024-07-15 15:34:25.061518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.388 [2024-07-15 15:34:25.070419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.388 [2024-07-15 15:34:25.070621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.388 [2024-07-15 15:34:25.070641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.388 [2024-07-15 15:34:25.079519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.388 [2024-07-15 15:34:25.079721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.388 [2024-07-15 15:34:25.079741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:21.389 [2024-07-15 15:34:25.088704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:21.389 [2024-07-15 15:34:25.088902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:21.389 [2024-07-15 15:34:25.088920] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:21.389 [2024-07-15 15:34:25.097805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60
00:29:21.389 [2024-07-15 15:34:25.098018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:21.389 [2024-07-15 15:34:25.098038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[... the same three-record group repeats roughly every 9 ms from 15:34:25.106944 through 15:34:25.913877: a tcp.c:2081:data_crc32_calc_done data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60, the failed WRITE (sqid:1, cid cycling 123/124/2/1/4/3, varying lba, len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd:007c p:0 m:0 dnr:0 ...]
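What fails at tcp.c:2081 above is the NVMe/TCP data digest (DDGST) check: a CRC32C computed over each data PDU's payload (0x1000 bytes here) and compared against the digest carried in the PDU. The sketch below is a minimal illustration of that comparison, not SPDK's code — it uses a plain bitwise CRC32C and made-up helper names (crc32c, ddgst_ok), where SPDK itself relies on optimized CRC32C routines.

    /* Minimal sketch of an NVMe/TCP data-digest check (illustrative only).
     * CRC32C: reflected Castagnoli polynomial, init and final XOR 0xFFFFFFFF. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CRC32C_POLY_REFLECTED 0x82F63B78u

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ ((crc & 1) ? CRC32C_POLY_REFLECTED : 0);
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* True when the digest carried in the PDU matches the received payload. */
    static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t ddgst_from_pdu)
    {
        return crc32c(payload, len) == ddgst_from_pdu;
    }

    int main(void)
    {
        uint8_t payload[4096] = { 0 };   /* 0x1000 bytes, as in the log */
        uint32_t good = crc32c(payload, sizeof(payload));

        printf("digest ok:  %d\n", ddgst_ok(payload, sizeof(payload), good));
        printf("digest bad: %d\n", ddgst_ok(payload, sizeof(payload), good ^ 1));
        return 0;
    }

A mismatch like the second call is what this test provokes deliberately; the transport drops the data rather than completing the WRITE normally, producing the completions logged above and below.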
[... identical digest-error/WRITE/transient-transport-error groups continue from 15:34:25.922909 through 15:34:26.298599 ...]
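The "(00/22)" printed by spdk_nvme_print_completion is Status Code Type / Status Code: SCT 0x0 (generic command status) with SC 0x22, Transient Transport Error, and dnr:0 means the host may retry the command. A small self-contained decoder for the completion status dword, with field offsets per the NVMe base spec completion-queue entry (the sample value is synthetic, not taken from this run):

    /* Decode CQE DW3 the way the "(SCT/SC)" lines above print it. */
    #include <stdint.h>
    #include <stdio.h>

    struct cpl_status {
        uint8_t p;    /* phase tag        (bit 16)     */
        uint8_t sc;   /* status code      (bits 24:17) */
        uint8_t sct;  /* status code type (bits 27:25) */
        uint8_t m;    /* more             (bit 30)     */
        uint8_t dnr;  /* do not retry     (bit 31)     */
    };

    static struct cpl_status decode_cpl_dw3(uint32_t dw3)
    {
        struct cpl_status s = {
            .p   = (dw3 >> 16) & 0x1,
            .sc  = (dw3 >> 17) & 0xff,
            .sct = (dw3 >> 25) & 0x7,
            .m   = (dw3 >> 30) & 0x1,
            .dnr = (dw3 >> 31) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* Synthetic DW3: SCT 0x0 / SC 0x22 (Transient Transport Error), DNR=0,
         * i.e. the combination reported for every WRITE in this log. */
        uint32_t dw3 = (0x22u << 17) | (0x0u << 25);
        struct cpl_status s = decode_cpl_dw3(dw3);

        printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n", s.sct, s.sc, s.p, s.m,
               s.dnr, s.dnr ? "do not retry" : "host may retry");
        return 0;
    }

Because dnr stays 0 throughout, none of these failures is fatal to the queue pair; the test keeps issuing WRITEs and the pattern simply repeats.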
00:29:22.433 [2024-07-15 15:34:26.307719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.433 [2024-07-15 15:34:26.307738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.433 [2024-07-15 15:34:26.316666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.433 [2024-07-15 15:34:26.316861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.433 [2024-07-15 15:34:26.316880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.433 [2024-07-15 15:34:26.325813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.433 [2024-07-15 15:34:26.326027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.433 [2024-07-15 15:34:26.326047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.433 [2024-07-15 15:34:26.334966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.433 [2024-07-15 15:34:26.335183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.433 [2024-07-15 15:34:26.335203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.344452] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.344684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.344705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.353651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.353846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.353865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.362776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.362969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.362988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.371922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with 
pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.372143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.372163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.381091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.381272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.381291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.390298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.390519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.390542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.399478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.399655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.399675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.408620] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.408796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.408815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.417766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.417994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.418014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.426886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x905a00) with pdu=0x2000190f3e60 00:29:22.693 [2024-07-15 15:34:26.427108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.693 [2024-07-15 15:34:26.427128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:22.693 [2024-07-15 15:34:26.436022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x905a00) with pdu=0x2000190f3e60
00:29:22.693 [2024-07-15 15:34:26.436244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.693 [2024-07-15 15:34:26.436264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:22.693
00:29:22.693 Latency(us)
00:29:22.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.693 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:22.693 nvme0n1 : 2.00 27694.89 108.18 0.00 0.00 4613.86 2975.33 19084.08
00:29:22.693 ===================================================================================================================
00:29:22.693 Total : 27694.89 108.18 0.00 0.00 4613.86 2975.33 19084.08
00:29:22.693 0
00:29:22.693 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:22.693 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:22.693 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:22.694 | .driver_specific
00:29:22.694 | .nvme_error
00:29:22.694 | .status_code
00:29:22.694 | .command_transient_transport_error'
00:29:22.694 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 217 > 0 ))
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3214204
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3214204 ']'
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3214204
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3214204
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3214204'
00:29:22.953 killing process with pid 3214204
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3214204
00:29:22.953 Received shutdown signal, test time was about 2.000000 seconds
00:29:22.953
00:29:22.953 Latency(us)
00:29:22.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:22.953 ===================================================================================================================
00:29:22.953 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:22.953 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3214204
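The get_transient_errcount/bperf_rpc/jq trace above is the pass/fail check for the randwrite pass that just finished: bdevperf's iostat is fetched over the bperf RPC socket and the transient-transport-error counter is extracted, and the check passes because 217 such errors were recorded ((( 217 > 0 ))). A minimal sketch of the same check, assuming the same socket path and paths relative to the SPDK tree; the errcount variable and the explicit exit are illustrative, not part of the harness:

  # Pull the per-status-code NVMe error counter out of bdev_get_iostat.
  # Assumes bdev_nvme_set_options --nvme-error-stat was applied before attach,
  # as in the traces in this log.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
  # Fail if no injected digest corruption surfaced as a transient error.
  (( errcount > 0 )) || exit 1   # this run counted 217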
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3214751
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3214751 /var/tmp/bperf.sock
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3214751 ']'
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:23.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:23.213 15:34:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:23.213 [2024-07-15 15:34:26.911757] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:29:23.213 [2024-07-15 15:34:26.911811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214751 ]
00:29:23.213 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:23.213 Zero copy mechanism will not be used.
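Here the harness starts a fresh bdevperf for the 131072-byte, queue-depth-16 error pass: -z keeps it idle until perform_tests arrives over the RPC socket, and waitforlisten polls (local max_retries=100 in the trace) until the UNIX socket exists. A rough sketch of that launch-and-wait pattern, with paths shortened relative to the SPDK tree; the sleep interval is an assumption and the loop only stands in for the harness's waitforlisten helper:

  # Launch bdevperf suspended (-z) so error injection can be armed first.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll for the RPC socket, mirroring max_retries=100 from the trace.
  for _ in $(seq 1 100); do
      [ -S /var/tmp/bperf.sock ] && break
      sleep 0.1
  done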
00:29:23.213 EAL: No free 2048 kB hugepages reported on node 1
00:29:23.213 [2024-07-15 15:34:26.981301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:23.213 [2024-07-15 15:34:27.056837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:24.151 15:34:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:24.415 nvme0n1
00:29:24.415 15:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:24.415 15:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:24.415 15:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:24.415 15:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:24.415 15:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:24.415 15:34:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:24.415 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:24.415 Zero copy mechanism will not be used.
00:29:24.415 Running I/O for 2 seconds...
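The RPC sequence just traced is the heart of the digest-error test: enable per-status-code NVMe error counters (--nvme-error-stat), make sure CRC32C error injection starts out disabled, attach the controller with data digest turned on (--ddgst), then arm the accel-layer CRC32C corruption (the same -t corrupt -i 32 arguments as in the trace) and kick off the workload. Injection is armed only after the attach, presumably so the controller handshake itself completes cleanly. Condensed into a sketch, with the $RPC shorthand and relative paths being illustrative; the calls themselves are exactly the ones traced above:

  RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable     # start from a clean state
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32   # arm CRC32C corruption
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests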
00:29:24.415 [2024-07-15 15:34:28.231259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.231682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.231711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.243148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.243507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.243533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.252372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.252726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.252749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.260210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.260592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.260615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.268963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.269330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.269352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.276663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.277025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.277047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.284472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.284916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.284938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.292151] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.292522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.292543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.299841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.300194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.300216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.309163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.309603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.309623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.415 [2024-07-15 15:34:28.316967] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.415 [2024-07-15 15:34:28.317047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.415 [2024-07-15 15:34:28.317068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.333758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.334179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.334201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.345907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.346263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.346284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.354963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.355319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.355341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.363072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.363427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.363448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.370001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.370080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.370099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.377404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.377770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.377792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.384120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.384446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.384468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.391514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.391843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.391865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.398317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.398647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.398668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.405453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.405778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.405799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.412225] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.412569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.412594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.419497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.419935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.419955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.426766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.427108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.427129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.433615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.433981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.434002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.440697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.675 [2024-07-15 15:34:28.441030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.675 [2024-07-15 15:34:28.441051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.675 [2024-07-15 15:34:28.448459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.448877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.448897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.455596] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.455941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 
[2024-07-15 15:34:28.455961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.462503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.462829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.462855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.470203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.470593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.470613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.479176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.479547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.479568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.488948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.489349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.489370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.498283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.498644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.498667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.506026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.506369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.506390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.513356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.513695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.513716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.520226] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.520686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.520707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.528296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.528704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.528725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.537223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.537689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.537709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.545018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.545378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.545403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.551148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.551446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.551467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.558690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.559117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.559138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.566949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.567286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.567306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.676 [2024-07-15 15:34:28.576403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.676 [2024-07-15 15:34:28.576855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.676 [2024-07-15 15:34:28.576877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.587021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.587390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.587411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.595303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.595631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.595651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.602735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.603121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.603142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.612035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.612341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.612362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.619727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.620000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.620021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.628035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.628374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.628394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.636972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.637337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.637358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.646402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.646810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.646830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.654642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.654963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.654983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.664476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.664842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.664863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.673042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.673377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.673397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.679892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 [2024-07-15 15:34:28.680261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.680281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.687071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.935 
[2024-07-15 15:34:28.687473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.935 [2024-07-15 15:34:28.687493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.935 [2024-07-15 15:34:28.694669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.695034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.695054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.701725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.702075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.702096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.707864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.708253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.708273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.714604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.714987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.715008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.720980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.721369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.721389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.727459] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.727787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.727808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.733300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) 
with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.733680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.733701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.740148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.740529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.740549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.747400] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.747750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.747774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.753190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.753494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.753516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.759963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.760229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.760249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.766397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.766710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.766731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.772903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.773244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.773265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.779902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.780162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.780183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.786745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.787020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.787041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.793539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.793853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.793873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.800578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.800946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.800968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.806877] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.807286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.807307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.812417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.812729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.812750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.819099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.819475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.819497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.827149] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.827484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.827505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:24.936 [2024-07-15 15:34:28.835075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:24.936 [2024-07-15 15:34:28.835401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.936 [2024-07-15 15:34:28.835421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.843038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.843395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.843416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.850942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.851360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.851380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.859178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.859510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.859530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.867275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.867660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.867681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.875693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.876068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.876089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
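Every "Data digest error" entry in this run is one injected CRC32C corruption caught on the NVMe/TCP qpair, and the matching WRITE is completed with TRANSIENT TRANSPORT ERROR (00/22), which is exactly what get_transient_errcount tallies at the end of the pass. To cross-check the count from a saved console log, something like the following works (bperf.log is a hypothetical capture of this output):

  # Count injected data-digest failures in a captured log.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log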
00:29:25.196 [2024-07-15 15:34:28.883499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.883910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.883931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.891769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.892096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.892117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.899555] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.899899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.899920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.907508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.907883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.907904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.915440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.915788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.915809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.923884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.924228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.924249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.932409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.932797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.932818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.940695] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.941084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.941108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.949052] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.949425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.949445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.956709] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.957022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.957043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.965430] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.965790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.965811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.974261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.974653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.974674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.983205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.983569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.983590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.991591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.992004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.992025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:28.999311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:28.999725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:28.999746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:29.008296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:29.008682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:29.008703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:29.016331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:29.016678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:29.016698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:29.025150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:29.025291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:29.025311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:29.032907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.196 [2024-07-15 15:34:29.033322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.196 [2024-07-15 15:34:29.033343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.196 [2024-07-15 15:34:29.041358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.041678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.041699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 15:34:29.050279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.050544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.050564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 15:34:29.058816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.059235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.059255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 15:34:29.067587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.067916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.067937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 15:34:29.076289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.076562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.076582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 15:34:29.084890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.085161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.085182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 15:34:29.093942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.094267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.094288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.197 [2024-07-15 15:34:29.102255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.197 [2024-07-15 15:34:29.102642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.197 [2024-07-15 15:34:29.102663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.110796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.111138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 
[2024-07-15 15:34:29.111159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.119716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.120044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.120067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.128168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.128427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.128448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.137446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.137780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.137801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.146203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.146546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.146566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.154682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.155073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.155095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.163725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.164069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.164094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.171685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.172036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.172058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.180235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.180622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.180642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.187947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.188253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.188274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.194801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.195107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.195127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.201696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.202033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.202054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.208509] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.208781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.208802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.215911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.216230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.216250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.223023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.223371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.223392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.230350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.230768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.230788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.237543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.237943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.237963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.244366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.244722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.244743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.250420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.250809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.250838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.257924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.258324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.258345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.266046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.266388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.266409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.273988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.274327] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.274348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.282315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.457 [2024-07-15 15:34:29.282683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.457 [2024-07-15 15:34:29.282704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.457 [2024-07-15 15:34:29.290623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.290952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.290973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.298610] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.299005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.299025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.306768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.307159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.307180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.314858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.315221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.315258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.322996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.323364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.323385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.331453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.331854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.331875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.339974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.340312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.340333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.348643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.349011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.349032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.458 [2024-07-15 15:34:29.356442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.458 [2024-07-15 15:34:29.356752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.458 [2024-07-15 15:34:29.356773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.364671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.365059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.365085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.373152] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.373548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.373568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.381757] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.382117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.382139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.390427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 
[2024-07-15 15:34:29.390751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.390772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.398704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.399096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.399116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.407281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.407586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.407607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.415088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.415442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.415463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.422804] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.423147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.423168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.431105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.431452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.431472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.439198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.439559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.439580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.447698] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) 
with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.448031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.448054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.456158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.456507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.456527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.463949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.464259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.464279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.472253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.472634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.472655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.479345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.479786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.479807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.486441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.486759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.486779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.493492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.493814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.493838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.500203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.500528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.500548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.507410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.507725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.507747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.513275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.718 [2024-07-15 15:34:29.513574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.718 [2024-07-15 15:34:29.513594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.718 [2024-07-15 15:34:29.520090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.520415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.520436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.526922] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.527195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.527215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.533443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.533770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.533790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.540360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.540659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.540680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.546804] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.547145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.547166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.553666] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.554014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.554034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.560629] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.560926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.560950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.567245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.567610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.567630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.574356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.574679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.574700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.580904] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.581300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.581321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.588302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.588636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.588657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
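For reading the completion lines themselves: spdk_nvme_print_completion shows the status half of completion queue entry dword 3, where (00/22) is (SCT/SC) — status code type 0 (generic) and status code 0x22, Transient Transport Error — sqhd is the submission queue head pointer (here cycling through 0001/0021/0041/0061), and p/m/dnr are the phase, more, and do-not-retry bits. A rough decoding sketch, with bit positions per the NVMe base specification and a raw dword value reconstructed here only for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the fields of NVMe completion dword 3 that show up above as
 * "(SCT/SC) ... cid:... p:. m:. dnr:.". */
int main(void)
{
    /* Hypothetical raw DW3 matching one entry above:
     * cid=0x000f (cid:15), p=0, sc=0x22, sct=0, m=0, dnr=0. */
    uint32_t dw3 = (0x22u << 17) | 0x000fu;

    unsigned cid = dw3 & 0xffffu;        /* command identifier */
    unsigned p   = (dw3 >> 16) & 0x1;    /* phase tag */
    unsigned sc  = (dw3 >> 17) & 0xffu;  /* 0x22 = Transient Transport Error */
    unsigned sct = (dw3 >> 25) & 0x7;    /* 0 = generic command status */
    unsigned m   = (dw3 >> 30) & 0x1;    /* more status info in log page */
    unsigned dnr = (dw3 >> 31) & 0x1;    /* 0 = host may retry */

    printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);
    return 0;
}
```

dnr:0 on every entry is the point of the test output: each digest failure is flagged as a transient transport error the host is allowed to retry, rather than a hard, do-not-retry failure.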
00:29:25.719 [2024-07-15 15:34:29.595208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.595540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.595561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.601557] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.601844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.601865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.608080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.608376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.608396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.614350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.614642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.614663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.719 [2024-07-15 15:34:29.620684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.719 [2024-07-15 15:34:29.620957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.719 [2024-07-15 15:34:29.620979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.626951] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.627227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.627247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.633017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.633322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.633343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.639171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.639460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.639480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.645410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.645727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.645747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.652753] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.653060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.653081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.659296] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.659614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.659634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.666607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.666889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.666909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.673909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.674293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.674314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:25.979 [2024-07-15 15:34:29.681036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90 00:29:25.979 [2024-07-15 15:34:29.681366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.979 [2024-07-15 15:34:29.681386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:25.979 [2024-07-15 15:34:29.688327] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90
00:29:25.979 [2024-07-15 15:34:29.688640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.979 [2024-07-15 15:34:29.688661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[The same three-line pattern repeats for dozens more len:32 WRITE commands between 15:34:29.695 and 15:34:30.197: a tcp.c:2081:data_crc32_calc_done data digest error on tqpair=(0x906270), the failing WRITE, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; only the lba and sqhd values differ. The final entry of the run is kept below.]
00:29:26.499 [2024-07-15 15:34:30.204203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x906270) with pdu=0x2000190fef90
00:29:26.499 [2024-07-15 15:34:30.204441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.499 [2024-07-15 15:34:30.204465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
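The elided block is easier to consume as a count than as raw text. A quick offline tally, assuming this console output has been saved to a file (the name build.log is illustrative, not something this job produced):

    # Each failing WRITE above logs exactly one data digest *ERROR* line and
    # one TRANSIENT TRANSPORT ERROR completion, so the two counts should match.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log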
00:29:26.499
00:29:26.499 Latency(us)
00:29:26.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.499 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:26.499 nvme0n1 : 2.00 4098.41 512.30 0.00 0.00 3899.02 2398.62 20237.52
00:29:26.499 ===================================================================================================================
00:29:26.499 Total : 4098.41 512.30 0.00 0.00 3899.02 2398.62 20237.52
00:29:26.499 0
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
| .driver_specific
| .nvme_error
| .status_code
| .command_transient_transport_error'
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 264 > 0 ))
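The get_transient_errcount step above is a single RPC round-trip plus a jq projection. A standalone sketch of the same query, with the socket path, bdev name, and filter taken from the trace:

    # Ask bdevperf for per-bdev iostat over its RPC socket, then project out
    # the transient-transport-error counter the test asserts on (264 in this run).
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"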
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3214751
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3214751 ']'
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3214751
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3214751
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3214751'
00:29:26.758 killing process with pid 3214751
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3214751
00:29:26.758 Received shutdown signal, test time was about 2.000000 seconds
00:29:26.758
00:29:26.758 Latency(us)
00:29:26.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.758 ===================================================================================================================
00:29:26.758 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3214751
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3212593
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3212593 ']'
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3212593
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3212593
00:29:27.044 15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3212593'
00:29:27.044 killing process with pid 3212593
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3212593
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3212593
00:29:27.044
00:29:27.044 real 0m16.750s
00:29:27.044 user 0m31.518s
00:29:27.044 sys 0m4.920s
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
15:34:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.044 ************************************
00:29:27.044 END TEST nvmf_digest_error
00:29:27.044 ************************************
15:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
15:34:30 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
15:34:30 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:27.304 rmmod nvme_tcp
00:29:27.304 rmmod nvme_fabrics
00:29:27.304 rmmod nvme_keyring
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
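The @948-@972 steps traced above, and the not-found branch that the final invocation below takes, all belong to one helper. A simplified reconstruction of its shape, inferred from the xtrace lines rather than copied from autotest_common.sh:

    # Kill-and-wait pattern as traced: refuse empty pids, report processes
    # that are already gone, never signal sudo itself, then kill and reap.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                             # @948
        if ! kill -0 "$pid"; then                             # @952
            echo "Process with pid $pid is not found"         # @975
            return 0
        fi
        if [ "$(uname)" = Linux ]; then                       # @953
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @954
            [ "$process_name" = sudo ] && return 1            # @958
        fi
        echo "killing process with pid $pid"                  # @966
        kill "$pid"                                           # @967
        wait "$pid"                                           # @972: only reaps children
    }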
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3212593 ']'
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3212593
15:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3212593 ']'
15:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3212593
00:29:27.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3212593) - No such process
15:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3212593 is not found'
00:29:27.304 Process with pid 3212593 is not found
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
15:34:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
15:34:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
15:34:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:29.203
00:29:29.203 real 0m42.751s
00:29:29.203 user 1m4.761s
00:29:29.203 sys 0m15.335s
15:34:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
15:34:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:29.203 ************************************
00:29:29.203 END TEST nvmf_digest
00:29:29.203 ************************************
15:34:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
15:34:33 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
15:34:33 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
15:34:33 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
15:34:33 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
15:34:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
15:34:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
15:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:29.463 ************************************
00:29:29.463 START TEST nvmf_bdevperf
00:29:29.463 ************************************
15:34:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:29.463 * Looking for test storage...
00:29:29.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:29.463 15:34:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- 
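The device scan that follows is easier to read knowing what it computes: for each supported PCI function, it expands a sysfs glob into kernel net device names. The core of that lookup, using the two E810 ports this run discovers:

    # Map a NIC's PCI address to its bound net device name(s) via sysfs,
    # mirroring the pci_net_devs steps traced below.
    for pci in 0000:af:00.0 0000:af:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        [ -e "${pci_net_devs[0]}" ] || continue            # no netdev bound
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done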
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:36.029 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:36.029 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.029 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:36.030 Found net devices under 0000:af:00.0: cvl_0_0 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:36.030 Found net devices under 0000:af:00.1: cvl_0_1 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.030 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.289 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.289 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.289 15:34:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:36.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:29:36.289 00:29:36.289 --- 10.0.0.2 ping statistics --- 00:29:36.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.289 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:29:36.289 00:29:36.289 --- 10.0.0.1 ping statistics --- 00:29:36.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.289 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:36.289 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3219099 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3219099 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3219099 ']' 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.290 15:34:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:36.549 [2024-07-15 15:34:40.240253] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:36.549 [2024-07-15 15:34:40.240305] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.549 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.549 [2024-07-15 15:34:40.315552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:36.549 [2024-07-15 15:34:40.390574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
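The discovery pass near the top of this excerpt resolves each e810 PCI function to its kernel net interface by globbing the device's net/ directory in sysfs and then stripping the result down to a bare interface name. A standalone sketch of that mapping, outside the harness (the PCI address and the cvl_0_0 name are copied from this run):

    pci=0000:af:00.0                                    # first e810 port found above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # glob -> .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the basename -> cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"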
00:29:36.549 [2024-07-15 15:34:40.390612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.549 [2024-07-15 15:34:40.390621] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.549 [2024-07-15 15:34:40.390630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.549 [2024-07-15 15:34:40.390637] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.549 [2024-07-15 15:34:40.390738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:36.549 [2024-07-15 15:34:40.390852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:36.549 [2024-07-15 15:34:40.390855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.487 [2024-07-15 15:34:41.102826] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.487 Malloc0 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.487 [2024-07-15 15:34:41.161787] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.487 { 00:29:37.487 "params": { 00:29:37.487 "name": "Nvme$subsystem", 00:29:37.487 "trtype": "$TEST_TRANSPORT", 00:29:37.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.487 "adrfam": "ipv4", 00:29:37.487 "trsvcid": "$NVMF_PORT", 00:29:37.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.487 "hdgst": ${hdgst:-false}, 00:29:37.487 "ddgst": ${ddgst:-false} 00:29:37.487 }, 00:29:37.487 "method": "bdev_nvme_attach_controller" 00:29:37.487 } 00:29:37.487 EOF 00:29:37.487 )") 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:37.487 15:34:41 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:37.487 "params": { 00:29:37.487 "name": "Nvme1", 00:29:37.487 "trtype": "tcp", 00:29:37.487 "traddr": "10.0.0.2", 00:29:37.487 "adrfam": "ipv4", 00:29:37.487 "trsvcid": "4420", 00:29:37.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.487 "hdgst": false, 00:29:37.487 "ddgst": false 00:29:37.487 }, 00:29:37.487 "method": "bdev_nvme_attach_controller" 00:29:37.487 }' 00:29:37.487 [2024-07-15 15:34:41.214414] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:37.487 [2024-07-15 15:34:41.214463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219265 ] 00:29:37.487 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.487 [2024-07-15 15:34:41.284142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.487 [2024-07-15 15:34:41.353334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.747 Running I/O for 1 seconds... 
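Everything between nvmf_tcp_init and this first bdevperf pass is scattered across xtrace records above; condensed, the bring-up looks like the sketch below. Interface names, addresses, the core mask, and all RPC arguments are copied from the trace; treating rpc_cmd as a thin wrapper around SPDK's scripts/rpc.py (default socket /var/tmp/spdk.sock) is an assumption about the harness.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    # Target-side port moves into a private network namespace; the
    # initiator-side port stays in the host, giving a real TCP path
    # between 10.0.0.1 (host) and 10.0.0.2 (namespace).
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # nvmf_tgt runs inside the namespace on cores 1-3 (-m 0xE), tracing enabled.
    ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten

    # Same RPC sequence as host/bdevperf.sh@17..@21 above.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420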
00:29:38.684
00:29:38.684 Latency(us)
00:29:38.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.684 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:38.684 Verification LBA range: start 0x0 length 0x4000
00:29:38.684 Nvme1n1 : 1.00 11781.17 46.02 0.00 0.00 10825.26 2306.87 18350.08
00:29:38.684 ===================================================================================================================
00:29:38.684 Total : 11781.17 46.02 0.00 0.00 10825.26 2306.87 18350.08
00:29:38.942 15:34:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3219530
00:29:38.942 15:34:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:29:38.942 15:34:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:29:38.943 {
00:29:38.943 "params": {
00:29:38.943 "name": "Nvme$subsystem",
00:29:38.943 "trtype": "$TEST_TRANSPORT",
00:29:38.943 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:38.943 "adrfam": "ipv4",
00:29:38.943 "trsvcid": "$NVMF_PORT",
00:29:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:38.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:38.943 "hdgst": ${hdgst:-false},
00:29:38.943 "ddgst": ${ddgst:-false}
00:29:38.943 },
00:29:38.943 "method": "bdev_nvme_attach_controller"
00:29:38.943 }
00:29:38.943 EOF
00:29:38.943 )")
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:29:38.943 15:34:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:29:38.943 "params": {
00:29:38.943 "name": "Nvme1",
00:29:38.943 "trtype": "tcp",
00:29:38.943 "traddr": "10.0.0.2",
00:29:38.943 "adrfam": "ipv4",
00:29:38.943 "trsvcid": "4420",
00:29:38.943 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:38.943 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:38.943 "hdgst": false,
00:29:38.943 "ddgst": false
00:29:38.943 },
00:29:38.943 "method": "bdev_nvme_attach_controller"
00:29:38.943 }'
00:29:38.943 [2024-07-15 15:34:42.749241] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:29:38.943 [2024-07-15 15:34:42.749290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219530 ]
00:29:38.943 EAL: No free 2048 kB hugepages reported on node 1
00:29:38.943 [2024-07-15 15:34:42.820006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:39.202 [2024-07-15 15:34:42.885702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:29:39.460 Running I/O for 15 seconds...
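Both bdevperf passes read their bdev configuration from an anonymous pipe (--json /dev/fd/62 for the 1-second pass, /dev/fd/63 for the 15-second one), which is what bash process substitution over gen_nvmf_target_json expands to. The printf output above is the attach-controller entry; a sketch of an equivalent direct invocation, with $SPDK as in the earlier sketch and under the assumption that gen_nvmf_target_json wraps that entry in the usual "subsystems"/"bdev" envelope:

    $SPDK/build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    )

The 15-second pass additionally passes -f, which from context lets the run survive the target disappearing rather than exiting on the first failure (the exact semantics are in bdevperf's usage text); the failover scenario below depends on that.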
00:29:41.997 15:34:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3219099 00:29:41.997 15:34:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:41.997 [2024-07-15 15:34:45.717628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.997 [2024-07-15 15:34:45.717670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.997 [2024-07-15 15:34:45.717689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.997 [2024-07-15 15:34:45.717700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.997 [2024-07-15 15:34:45.717711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.997 [2024-07-15 15:34:45.717723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.997 [2024-07-15 15:34:45.717735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.997 [2024-07-15 15:34:45.717746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.997 [2024-07-15 15:34:45.717764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.997 [2024-07-15 15:34:45.717775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.717797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.717820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.717847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.717872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 
15:34:45.717894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.717929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.717958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.717982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.717994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718331] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.998 [2024-07-15 15:34:45.718412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:109792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:109816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.998 [2024-07-15 15:34:45.718674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.998 [2024-07-15 15:34:45.718684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:41.999 [2024-07-15 15:34:45.718745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718949] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.718958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.999 [2024-07-15 15:34:45.718978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.718990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.999 [2024-07-15 15:34:45.718999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:41.999 [2024-07-15 15:34:45.719019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:121 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:41.999 [2024-07-15 15:34:45.719533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.999 [2024-07-15 15:34:45.719542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110208 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:42.000 [2024-07-15 15:34:45.719762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.719982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.719991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:42.000 [2024-07-15 15:34:45.720296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1d830 is same with the state(5) to be set 00:29:42.000 [2024-07-15 15:34:45.720317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:42.000 [2024-07-15 15:34:45.720325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:42.000 [2024-07-15 15:34:45.720333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110504 len:8 PRP1 0x0 PRP2 0x0 00:29:42.000 [2024-07-15 15:34:45.720342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720389] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c1d830 was disconnected and freed. reset controller. 
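The wall of ABORTED - SQ DELETION completions above is the intended effect of host/bdevperf.sh@33: the target (pid 3219099) is killed hard while the verify workload has its full queue depth of 128 commands in flight, so every queued I/O on qpair 0x1c1d830 is completed manually as aborted, the qpair is freed, and bdev_nvme starts resetting the controller. With nothing listening on 10.0.0.2:4420 any more, each reconnect attempt below fails with errno 111 (ECONNREFUSED). Reduced to its two traced steps (using $nvmfpid as a stand-in for however the harness carries the target pid, 3219099 in this run):

    kill -9 "$nvmfpid"   # host/bdevperf.sh@33: SIGKILL nvmf_tgt mid-run
    sleep 3              # host/bdevperf.sh@35: let bdev_nvme cycle through reconnects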
00:29:42.000 [2024-07-15 15:34:45.720434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.000 [2024-07-15 15:34:45.720448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.000 [2024-07-15 15:34:45.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.000 [2024-07-15 15:34:45.720487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.000 [2024-07-15 15:34:45.720497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.001 [2024-07-15 15:34:45.720507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.001 [2024-07-15 15:34:45.720517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.723172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.723198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.723908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.723927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.723937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.724108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.724278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.724289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.724300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.726979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
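With the I/O qpair gone, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted the same way, and the first reset attempt fails at the socket layer: connect() to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) returns errno 111, which on Linux is ECONNREFUSED, because nothing is listening on the target side while it is down. A small sketch that reproduces the same errno; the address and port mirror the log, and any host with the port closed behaves the same:

    # Reproduce the "connect() failed, errno = 111" reported by posix.c.
    # Illustrative sketch: address/port taken from the log above.
    import errno
    import socket

    def try_connect(addr: str, port: int) -> int:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            s.connect((addr, port))
            return 0
        except OSError as e:
            return e.errno  # 111 (ECONNREFUSED) when nothing listens on the port
        finally:
            s.close()

    assert errno.ECONNREFUSED == 111  # Linux value, matching the log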
00:29:42.001 [2024-07-15 15:34:45.736267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.736815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.736884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.736918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.737294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.737460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.737471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.737481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.740033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.001 [2024-07-15 15:34:45.749006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.749538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.749590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.749622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.750183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.750367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.750379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.750388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.753028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.001 [2024-07-15 15:34:45.761926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.762355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.762374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.762383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.762549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.762715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.762726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.762735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.765336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.001 [2024-07-15 15:34:45.774652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.775181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.775233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.775267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.775769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.775932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.775942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.775951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.778405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.001 [2024-07-15 15:34:45.787448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.787895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.787936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.787970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.788494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.788652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.788662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.788671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.791140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.001 [2024-07-15 15:34:45.800136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.800676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.800727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.800760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.801283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.801442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.801453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.801461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.803924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
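From here the log settles into one cycle repeated roughly every 12 to 13 ms: a disconnect notice, connect() refused with errno 111, a flush failure on the dead socket, "controller reinitialization failed", and "Resetting controller failed." A generic sketch of that retry shape follows; it illustrates the pattern visible in the log, not SPDK's actual reset state machine, and the attempt count and delay are assumptions:

    # Generic reconnect-retry sketch mirroring the cycle above: try to connect,
    # treat ECONNREFUSED as "target still down", wait briefly, try again.
    # Not SPDK code; attempts and delay are illustrative assumptions.
    import errno
    import socket
    import time

    def reset_loop(addr: str, port: int, attempts: int = 5, delay_s: float = 0.013) -> int:
        for attempt in range(1, attempts + 1):
            try:
                with socket.create_connection((addr, port), timeout=1.0):
                    return attempt            # reconnected on this attempt
            except OSError as e:
                if e.errno != errno.ECONNREFUSED:
                    raise                     # unexpected failure; surface it
                time.sleep(delay_s)           # log shows ~12-13 ms between cycles
        return -1                             # still refused; controller stays failed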
00:29:42.001 [2024-07-15 15:34:45.812865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.813363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.813413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.813445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.814053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.814330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.814345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.814358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.818096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.001 [2024-07-15 15:34:45.826079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.826575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.826593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.826602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.826759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.826924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.826936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.001 [2024-07-15 15:34:45.826948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.001 [2024-07-15 15:34:45.829407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.001 [2024-07-15 15:34:45.838844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.001 [2024-07-15 15:34:45.839371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.001 [2024-07-15 15:34:45.839421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.001 [2024-07-15 15:34:45.839454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.001 [2024-07-15 15:34:45.839751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.001 [2024-07-15 15:34:45.839915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.001 [2024-07-15 15:34:45.839925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.002 [2024-07-15 15:34:45.839935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.002 [2024-07-15 15:34:45.842394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.002 [2024-07-15 15:34:45.851537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.002 [2024-07-15 15:34:45.851981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.002 [2024-07-15 15:34:45.852033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.002 [2024-07-15 15:34:45.852065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.002 [2024-07-15 15:34:45.852449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.002 [2024-07-15 15:34:45.852607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.002 [2024-07-15 15:34:45.852617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.002 [2024-07-15 15:34:45.852626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.002 [2024-07-15 15:34:45.855129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.002 [2024-07-15 15:34:45.864187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.002 [2024-07-15 15:34:45.864695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.002 [2024-07-15 15:34:45.864746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.002 [2024-07-15 15:34:45.864779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.002 [2024-07-15 15:34:45.865385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.002 [2024-07-15 15:34:45.865753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.002 [2024-07-15 15:34:45.865764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.002 [2024-07-15 15:34:45.865773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.002 [2024-07-15 15:34:45.868233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.002 [2024-07-15 15:34:45.876939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.002 [2024-07-15 15:34:45.877363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.002 [2024-07-15 15:34:45.877420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.002 [2024-07-15 15:34:45.877454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.002 [2024-07-15 15:34:45.877826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.002 [2024-07-15 15:34:45.877991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.002 [2024-07-15 15:34:45.878002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.002 [2024-07-15 15:34:45.878010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.002 [2024-07-15 15:34:45.880469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.002 [2024-07-15 15:34:45.889608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.002 [2024-07-15 15:34:45.890138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.002 [2024-07-15 15:34:45.890189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.002 [2024-07-15 15:34:45.890222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.002 [2024-07-15 15:34:45.890597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.002 [2024-07-15 15:34:45.890754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.002 [2024-07-15 15:34:45.890765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.002 [2024-07-15 15:34:45.890774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.002 [2024-07-15 15:34:45.893241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.263 [2024-07-15 15:34:45.902594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.902985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.903003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.903014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.903179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.903345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.903356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.903365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.905928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.263 [2024-07-15 15:34:45.915416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.915942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.915960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.915989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.916541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.916701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.916712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.916721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.919222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.263 [2024-07-15 15:34:45.928064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.928583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.928633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.928666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.929273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.929844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.929855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.929864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.932404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.263 [2024-07-15 15:34:45.940821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.941341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.941358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.941368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.941525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.941683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.941693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.941702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.944168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.263 [2024-07-15 15:34:45.953622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.954163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.954215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.954247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.954782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.954950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.954961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.954970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.957440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
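The "(9): Bad file descriptor" in each flush failure is plain errno 9 (EBADF): by the time nvme_tcp_qpair_process_completions tries to flush, the qpair's socket has already been torn down. A quick, illustrative lookup of both errno values recurring in these cycles:

    # Map the two errno values recurring in this log to their names/messages.
    import errno
    import os

    assert errno.errorcode[9] == "EBADF"
    print(os.strerror(9))    # -> Bad file descriptor
    print(os.strerror(111))  # -> Connection refused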
00:29:42.263 [2024-07-15 15:34:45.966291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.966831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.966896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.966928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.967518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.967988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.967999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.968009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.970570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.263 [2024-07-15 15:34:45.979159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.979628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.979678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.979711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.980109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.980275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.980286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.980295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.982910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.263 [2024-07-15 15:34:45.992064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:45.992429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:45.992447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:45.992456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.263 [2024-07-15 15:34:45.992622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.263 [2024-07-15 15:34:45.992788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.263 [2024-07-15 15:34:45.992799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.263 [2024-07-15 15:34:45.992808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.263 [2024-07-15 15:34:45.995406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.263 [2024-07-15 15:34:46.004764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.263 [2024-07-15 15:34:46.005304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.263 [2024-07-15 15:34:46.005358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.263 [2024-07-15 15:34:46.005399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.006005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.006352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.006363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.006372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.009890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.264 [2024-07-15 15:34:46.018172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.018533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.018586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.018618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.019057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.019215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.019226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.019234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.021692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.264 [2024-07-15 15:34:46.030839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.031199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.031217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.031226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.031383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.031542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.031552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.031561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.034029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.264 [2024-07-15 15:34:46.043605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.044101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.044118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.044128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.044285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.044443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.044457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.044465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.046930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.264 [2024-07-15 15:34:46.056362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.056855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.056872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.056882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.057039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.057197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.057207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.057216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.059680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.264 [2024-07-15 15:34:46.069113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.069648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.069699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.069732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.070135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.070293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.070303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.070312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.072771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.264 [2024-07-15 15:34:46.081765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.082297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.082350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.082383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.082948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.083107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.083118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.083127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.085586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.264 [2024-07-15 15:34:46.094441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.094967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.095021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.095054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.095384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.095541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.095552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.095561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.098025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.264 [2024-07-15 15:34:46.107163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.107689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.107740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.107773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.108245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.108402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.108413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.108422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.111010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.264 [2024-07-15 15:34:46.119900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.120428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.120479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.120511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.121117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.121646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.121657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.121666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.124201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.264 [2024-07-15 15:34:46.132604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.133141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.133192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.264 [2024-07-15 15:34:46.133224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.264 [2024-07-15 15:34:46.133657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.264 [2024-07-15 15:34:46.133815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.264 [2024-07-15 15:34:46.133826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.264 [2024-07-15 15:34:46.133840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.264 [2024-07-15 15:34:46.136300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.264 [2024-07-15 15:34:46.145290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.264 [2024-07-15 15:34:46.145818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.264 [2024-07-15 15:34:46.145883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.265 [2024-07-15 15:34:46.145915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.265 [2024-07-15 15:34:46.146455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.265 [2024-07-15 15:34:46.146613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.265 [2024-07-15 15:34:46.146623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.265 [2024-07-15 15:34:46.146633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.265 [2024-07-15 15:34:46.149097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.265 [2024-07-15 15:34:46.157972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.265 [2024-07-15 15:34:46.158504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.265 [2024-07-15 15:34:46.158555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.265 [2024-07-15 15:34:46.158588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.265 [2024-07-15 15:34:46.159193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.265 [2024-07-15 15:34:46.159535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.265 [2024-07-15 15:34:46.159553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.265 [2024-07-15 15:34:46.159561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.265 [2024-07-15 15:34:46.162023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.526 [2024-07-15 15:34:46.170822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.526 [2024-07-15 15:34:46.171361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-07-15 15:34:46.171412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.526 [2024-07-15 15:34:46.171444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.526 [2024-07-15 15:34:46.172050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.526 [2024-07-15 15:34:46.172504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.526 [2024-07-15 15:34:46.172515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.526 [2024-07-15 15:34:46.172527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.526 [2024-07-15 15:34:46.175190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.526 [2024-07-15 15:34:46.183466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.526 [2024-07-15 15:34:46.183995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-07-15 15:34:46.184047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.526 [2024-07-15 15:34:46.184079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.526 [2024-07-15 15:34:46.184559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.526 [2024-07-15 15:34:46.184717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.526 [2024-07-15 15:34:46.184727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.526 [2024-07-15 15:34:46.184736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.526 [2024-07-15 15:34:46.187200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.526 [2024-07-15 15:34:46.196195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.526 [2024-07-15 15:34:46.196721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-07-15 15:34:46.196773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.526 [2024-07-15 15:34:46.196806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.526 [2024-07-15 15:34:46.197414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.526 [2024-07-15 15:34:46.197803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.526 [2024-07-15 15:34:46.197814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.526 [2024-07-15 15:34:46.197822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.526 [2024-07-15 15:34:46.201268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.526 [2024-07-15 15:34:46.209726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.526 [2024-07-15 15:34:46.210236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.526 [2024-07-15 15:34:46.210254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.526 [2024-07-15 15:34:46.210264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.526 [2024-07-15 15:34:46.210430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.527 [2024-07-15 15:34:46.210618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.527 [2024-07-15 15:34:46.210629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.527 [2024-07-15 15:34:46.210639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.527 [2024-07-15 15:34:46.213171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.527 [2024-07-15 15:34:46.222446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.527 [2024-07-15 15:34:46.222979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.527 [2024-07-15 15:34:46.223030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.527 [2024-07-15 15:34:46.223064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.527 [2024-07-15 15:34:46.223536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.527 [2024-07-15 15:34:46.223703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.527 [2024-07-15 15:34:46.223714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.527 [2024-07-15 15:34:46.223723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.527 [2024-07-15 15:34:46.226323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:42.527 [2024-07-15 15:34:46.235338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:42.527 [2024-07-15 15:34:46.235852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.527 [2024-07-15 15:34:46.235871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:42.527 [2024-07-15 15:34:46.235880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:42.527 [2024-07-15 15:34:46.236037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:42.527 [2024-07-15 15:34:46.236195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:42.527 [2024-07-15 15:34:46.236205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:42.527 [2024-07-15 15:34:46.236214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:42.527 [2024-07-15 15:34:46.238672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:42.527 … 00:29:43.053 [2024-07-15 15:34:46.248126 – 15:34:46.883124] 50 further identical reset cycles, ~13 ms apart (same nine-entry sequence as above): connect() to addr=10.0.0.2, port=4420 failed with errno = 111 on every attempt for tqpair=0x19eca70, each cycle ending in "Resetting controller failed."
00:29:43.053 [2024-07-15 15:34:46.892105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.053 [2024-07-15 15:34:46.892637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.053 [2024-07-15 15:34:46.892655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.053 [2024-07-15 15:34:46.892666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.054 [2024-07-15 15:34:46.892842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.054 [2024-07-15 15:34:46.893022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-15 15:34:46.893033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-15 15:34:46.893042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-15 15:34:46.895637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.054 [2024-07-15 15:34:46.904805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-15 15:34:46.905335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-15 15:34:46.905388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-15 15:34:46.905421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.054 [2024-07-15 15:34:46.906029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.054 [2024-07-15 15:34:46.906409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-15 15:34:46.906419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-15 15:34:46.906428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-15 15:34:46.910161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.054 [2024-07-15 15:34:46.918121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-15 15:34:46.918622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-15 15:34:46.918640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-15 15:34:46.918650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.054 [2024-07-15 15:34:46.918806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.054 [2024-07-15 15:34:46.918970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-15 15:34:46.918981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-15 15:34:46.918989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-15 15:34:46.921448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.054 [2024-07-15 15:34:46.930822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-15 15:34:46.931312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-15 15:34:46.931352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-15 15:34:46.931384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.054 [2024-07-15 15:34:46.931990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.054 [2024-07-15 15:34:46.932255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-15 15:34:46.932266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-15 15:34:46.932274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-15 15:34:46.934730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.054 [2024-07-15 15:34:46.943584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-15 15:34:46.944102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-15 15:34:46.944119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-15 15:34:46.944128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.054 [2024-07-15 15:34:46.944284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.054 [2024-07-15 15:34:46.944441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-15 15:34:46.944451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-15 15:34:46.944460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.054 [2024-07-15 15:34:46.946924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.054 [2024-07-15 15:34:46.956493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.054 [2024-07-15 15:34:46.957019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.054 [2024-07-15 15:34:46.957048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.054 [2024-07-15 15:34:46.957061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.054 [2024-07-15 15:34:46.957226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.054 [2024-07-15 15:34:46.957391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.054 [2024-07-15 15:34:46.957402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.054 [2024-07-15 15:34:46.957412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.315 [2024-07-15 15:34:46.960132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.315 [2024-07-15 15:34:46.969197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.315 [2024-07-15 15:34:46.969720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.315 [2024-07-15 15:34:46.969737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.315 [2024-07-15 15:34:46.969747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.315 [2024-07-15 15:34:46.969918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.315 [2024-07-15 15:34:46.970087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.315 [2024-07-15 15:34:46.970097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.315 [2024-07-15 15:34:46.970105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.315 [2024-07-15 15:34:46.972560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.315 [2024-07-15 15:34:46.981929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.315 [2024-07-15 15:34:46.982429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.315 [2024-07-15 15:34:46.982481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.315 [2024-07-15 15:34:46.982514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.315 [2024-07-15 15:34:46.983120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.315 [2024-07-15 15:34:46.983624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.315 [2024-07-15 15:34:46.983635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.315 [2024-07-15 15:34:46.983644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.315 [2024-07-15 15:34:46.986105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.315 [2024-07-15 15:34:46.994609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.315 [2024-07-15 15:34:46.995142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.315 [2024-07-15 15:34:46.995160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.315 [2024-07-15 15:34:46.995170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.315 [2024-07-15 15:34:46.995327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.315 [2024-07-15 15:34:46.995484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.315 [2024-07-15 15:34:46.995497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.315 [2024-07-15 15:34:46.995506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.315 [2024-07-15 15:34:46.998072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.315 [2024-07-15 15:34:47.007423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.315 [2024-07-15 15:34:47.007933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.315 [2024-07-15 15:34:47.007985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.315 [2024-07-15 15:34:47.008017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.315 [2024-07-15 15:34:47.008544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.315 [2024-07-15 15:34:47.008703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.315 [2024-07-15 15:34:47.008714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.315 [2024-07-15 15:34:47.008723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.315 [2024-07-15 15:34:47.011254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.315 [2024-07-15 15:34:47.020202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.020652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.020704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.020737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.021288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.021447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.021458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.021467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.023934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.316 [2024-07-15 15:34:47.032991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.033498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.033515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.033524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.033680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.033844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.033855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.033863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.036321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.316 [2024-07-15 15:34:47.045793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.046309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.046328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.046338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.046508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.046678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.046689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.046699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.049374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.316 [2024-07-15 15:34:47.058551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.059104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.059156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.059189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.059630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.059789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.059800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.059809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.062275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.316 [2024-07-15 15:34:47.071271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.071795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.071857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.071890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.072481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.072910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.072921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.072930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.075476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.316 [2024-07-15 15:34:47.084041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.084471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.084489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.084498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.084658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.084816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.084826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.084842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.087298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.316 [2024-07-15 15:34:47.096733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.097241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.097294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.097326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.097693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.097859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.097871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.097879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.101496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.316 [2024-07-15 15:34:47.110002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.110464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.110482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.110492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.110661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.110840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.110852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.110861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.113465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.316 [2024-07-15 15:34:47.122758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.123284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.123336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.123368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.123841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.124000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.124010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.124022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.126564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.316 [2024-07-15 15:34:47.135503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.136030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.136084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.136117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.136708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.136967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.136978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.136987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.139447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.316 [2024-07-15 15:34:47.148238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.316 [2024-07-15 15:34:47.148747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.316 [2024-07-15 15:34:47.148764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.316 [2024-07-15 15:34:47.148773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.316 [2024-07-15 15:34:47.148935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.316 [2024-07-15 15:34:47.149093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.316 [2024-07-15 15:34:47.149103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.316 [2024-07-15 15:34:47.149112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.316 [2024-07-15 15:34:47.151699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.316 [2024-07-15 15:34:47.160989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.317 [2024-07-15 15:34:47.161499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.317 [2024-07-15 15:34:47.161517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.317 [2024-07-15 15:34:47.161526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.317 [2024-07-15 15:34:47.161682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.317 [2024-07-15 15:34:47.161847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.317 [2024-07-15 15:34:47.161859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.317 [2024-07-15 15:34:47.161867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.317 [2024-07-15 15:34:47.164327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.317 [2024-07-15 15:34:47.173749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.317 [2024-07-15 15:34:47.174269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.317 [2024-07-15 15:34:47.174329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.317 [2024-07-15 15:34:47.174361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.317 [2024-07-15 15:34:47.174872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.317 [2024-07-15 15:34:47.175040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.317 [2024-07-15 15:34:47.175052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.317 [2024-07-15 15:34:47.175061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.317 [2024-07-15 15:34:47.177559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.317 [2024-07-15 15:34:47.186406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.317 [2024-07-15 15:34:47.186930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.317 [2024-07-15 15:34:47.186981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.317 [2024-07-15 15:34:47.187013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.317 [2024-07-15 15:34:47.187603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.317 [2024-07-15 15:34:47.188146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.317 [2024-07-15 15:34:47.188157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.317 [2024-07-15 15:34:47.188166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.317 [2024-07-15 15:34:47.190624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.317 [2024-07-15 15:34:47.199183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.317 [2024-07-15 15:34:47.199691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.317 [2024-07-15 15:34:47.199739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.317 [2024-07-15 15:34:47.199772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.317 [2024-07-15 15:34:47.200376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.317 [2024-07-15 15:34:47.200951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.317 [2024-07-15 15:34:47.200962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.317 [2024-07-15 15:34:47.200971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.317 [2024-07-15 15:34:47.203430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.317 [2024-07-15 15:34:47.212027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.317 [2024-07-15 15:34:47.212569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.317 [2024-07-15 15:34:47.212620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.317 [2024-07-15 15:34:47.212653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.317 [2024-07-15 15:34:47.213055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.317 [2024-07-15 15:34:47.213229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.317 [2024-07-15 15:34:47.213240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.317 [2024-07-15 15:34:47.213249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.317 [2024-07-15 15:34:47.215704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.578 [2024-07-15 15:34:47.224910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.578 [2024-07-15 15:34:47.225434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.225484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.225516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.226048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.226221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.226232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.226242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.228906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.579 [2024-07-15 15:34:47.237561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.238072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.238090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.238099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.238256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.238413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.238424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.238432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.240986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.579 [2024-07-15 15:34:47.250288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.250755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.250773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.250783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.250955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.251120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.251132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.251140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.253749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.579 [2024-07-15 15:34:47.263187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.263718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.263769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.263802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.264216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.264384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.264395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.264405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.266893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.579 [2024-07-15 15:34:47.275982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.276466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.276517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.276550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.276979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.277138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.277149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.277158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.279616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.579 [2024-07-15 15:34:47.288759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.289284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.289336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.289368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.289975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.290433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.290444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.290453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.293963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.579 [2024-07-15 15:34:47.302159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.302674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.302725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.302764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.303311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.303470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.303480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.303489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.305949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.579 [2024-07-15 15:34:47.314902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.315429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.315479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.315511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.315992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.316150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.316162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.316171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.318629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.579 [2024-07-15 15:34:47.327721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.328250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.328308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.579 [2024-07-15 15:34:47.328341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.579 [2024-07-15 15:34:47.328949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.579 [2024-07-15 15:34:47.329309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.579 [2024-07-15 15:34:47.329320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.579 [2024-07-15 15:34:47.329329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.579 [2024-07-15 15:34:47.331786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.579 [2024-07-15 15:34:47.340490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.579 [2024-07-15 15:34:47.341011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.579 [2024-07-15 15:34:47.341062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.341094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.341399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.341556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.341570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.341580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.344045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-15 15:34:47.353315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.353849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.353901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.353934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.354423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.354581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.354592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.354600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.357062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-15 15:34:47.366061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.366565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.366616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.366648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.367256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.367602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.367613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.367621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.370084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-15 15:34:47.378706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.379232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.379250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.379259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.379415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.379572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.379583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.379591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.382057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-15 15:34:47.391493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.392015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.392068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.392101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.392619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.392777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.392788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.392796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.395260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-15 15:34:47.404273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.404788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.404806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.404816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.404979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.405137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.405148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.405156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.407618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-15 15:34:47.417025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.417490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.417508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.417517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.417674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.417838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.417850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.417858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.420321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-15 15:34:47.429703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.430198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.430216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.430226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.430387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.430545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.430556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.430565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.433033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-15 15:34:47.442467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.442982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.443034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.443067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.443522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.443680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.443691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.443701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.446164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.580 [2024-07-15 15:34:47.455222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.455744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.455796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.580 [2024-07-15 15:34:47.455828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.580 [2024-07-15 15:34:47.456460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.580 [2024-07-15 15:34:47.456750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.580 [2024-07-15 15:34:47.456761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.580 [2024-07-15 15:34:47.456771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.580 [2024-07-15 15:34:47.459251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.580 [2024-07-15 15:34:47.467956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.580 [2024-07-15 15:34:47.468384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.580 [2024-07-15 15:34:47.468401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-15 15:34:47.468411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.581 [2024-07-15 15:34:47.468567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.581 [2024-07-15 15:34:47.468724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-15 15:34:47.468735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-15 15:34:47.468746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.581 [2024-07-15 15:34:47.471213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.581 [2024-07-15 15:34:47.480905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.581 [2024-07-15 15:34:47.481445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.581 [2024-07-15 15:34:47.481463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.581 [2024-07-15 15:34:47.481473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.581 [2024-07-15 15:34:47.481642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.581 [2024-07-15 15:34:47.481813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.581 [2024-07-15 15:34:47.481823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.581 [2024-07-15 15:34:47.481838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.841 [2024-07-15 15:34:47.484515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.841 [2024-07-15 15:34:47.493886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.841 [2024-07-15 15:34:47.494414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.841 [2024-07-15 15:34:47.494469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.841 [2024-07-15 15:34:47.494502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.841 [2024-07-15 15:34:47.494809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.841 [2024-07-15 15:34:47.494998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.841 [2024-07-15 15:34:47.495010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.841 [2024-07-15 15:34:47.495020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.841 [2024-07-15 15:34:47.497591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.841 [2024-07-15 15:34:47.506587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.841 [2024-07-15 15:34:47.507132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.841 [2024-07-15 15:34:47.507185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.841 [2024-07-15 15:34:47.507217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.841 [2024-07-15 15:34:47.507774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.841 [2024-07-15 15:34:47.507957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.841 [2024-07-15 15:34:47.507968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.841 [2024-07-15 15:34:47.507977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.841 [2024-07-15 15:34:47.510613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.841 [2024-07-15 15:34:47.519530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.841 [2024-07-15 15:34:47.520060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.841 [2024-07-15 15:34:47.520112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.841 [2024-07-15 15:34:47.520145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.841 [2024-07-15 15:34:47.520630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.841 [2024-07-15 15:34:47.520797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.841 [2024-07-15 15:34:47.520809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.841 [2024-07-15 15:34:47.520818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.841 [2024-07-15 15:34:47.523413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.841 [2024-07-15 15:34:47.532212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.841 [2024-07-15 15:34:47.532724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.841 [2024-07-15 15:34:47.532742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.532750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.532914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.533071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.533081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.533089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.535547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-15 15:34:47.544972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.545433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.545485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.545518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.545945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.546103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.546114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.546122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.548577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-15 15:34:47.557703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.558239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.558292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.558324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.558698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.558863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.558875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.558883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.561340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-15 15:34:47.570481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.570931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.570982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.571015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.571604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.572114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.572126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.572134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.574594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-15 15:34:47.583439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.583952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.584004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.584037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.584626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.585086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.585098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.585107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.587562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-15 15:34:47.596119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.596638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.596689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.596721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.597057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.597216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.597227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.597238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.599702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-15 15:34:47.608879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.609412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.609465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.609496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.609987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.610145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.610156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.610165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.612729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-15 15:34:47.621580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.622078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.622129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.622161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.622748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.622913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.622925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.622935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.625389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-15 15:34:47.634292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.634808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.634872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.634905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.635210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.635369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.635380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.635388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.637851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-15 15:34:47.646987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.647428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.647449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.647458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.647615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.647772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.647784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.647792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.650261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.842 [2024-07-15 15:34:47.659714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.660194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.842 [2024-07-15 15:34:47.660246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.842 [2024-07-15 15:34:47.660280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.842 [2024-07-15 15:34:47.660887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.842 [2024-07-15 15:34:47.661340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.842 [2024-07-15 15:34:47.661351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.842 [2024-07-15 15:34:47.661360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.842 [2024-07-15 15:34:47.663819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.842 [2024-07-15 15:34:47.672380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.842 [2024-07-15 15:34:47.672796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-15 15:34:47.672872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-15 15:34:47.672905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.843 [2024-07-15 15:34:47.673495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.843 [2024-07-15 15:34:47.674098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-15 15:34:47.674132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-15 15:34:47.674140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-15 15:34:47.677787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.843 [2024-07-15 15:34:47.685634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-15 15:34:47.686163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-15 15:34:47.686216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-15 15:34:47.686248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.843 [2024-07-15 15:34:47.686854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.843 [2024-07-15 15:34:47.687374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-15 15:34:47.687385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-15 15:34:47.687394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-15 15:34:47.689855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.843 [2024-07-15 15:34:47.698417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-15 15:34:47.698857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-15 15:34:47.698875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-15 15:34:47.698884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.843 [2024-07-15 15:34:47.699041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.843 [2024-07-15 15:34:47.699198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-15 15:34:47.699209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-15 15:34:47.699217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-15 15:34:47.701681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.843 [2024-07-15 15:34:47.711216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-15 15:34:47.711742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-15 15:34:47.711793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-15 15:34:47.711825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.843 [2024-07-15 15:34:47.712229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.843 [2024-07-15 15:34:47.712388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-15 15:34:47.712399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-15 15:34:47.712407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-15 15:34:47.714952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:43.843 [2024-07-15 15:34:47.723945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-15 15:34:47.724468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-15 15:34:47.724517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-15 15:34:47.724550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.843 [2024-07-15 15:34:47.725156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.843 [2024-07-15 15:34:47.725443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-15 15:34:47.725454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-15 15:34:47.725463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-15 15:34:47.728012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:43.843 [2024-07-15 15:34:47.736714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:43.843 [2024-07-15 15:34:47.737225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.843 [2024-07-15 15:34:47.737243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:43.843 [2024-07-15 15:34:47.737252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:43.843 [2024-07-15 15:34:47.737408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:43.843 [2024-07-15 15:34:47.737566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:43.843 [2024-07-15 15:34:47.737577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:43.843 [2024-07-15 15:34:47.737586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:43.843 [2024-07-15 15:34:47.740051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.103 [2024-07-15 15:34:47.749689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.103 [2024-07-15 15:34:47.750210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.103 [2024-07-15 15:34:47.750229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.103 [2024-07-15 15:34:47.750238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.103 [2024-07-15 15:34:47.750402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.103 [2024-07-15 15:34:47.750568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.103 [2024-07-15 15:34:47.750578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.103 [2024-07-15 15:34:47.750587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.103 [2024-07-15 15:34:47.753252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.103 [2024-07-15 15:34:47.762443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.103 [2024-07-15 15:34:47.762945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.103 [2024-07-15 15:34:47.762997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.103 [2024-07-15 15:34:47.763030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.103 [2024-07-15 15:34:47.763382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.103 [2024-07-15 15:34:47.763539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.103 [2024-07-15 15:34:47.763550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.103 [2024-07-15 15:34:47.763559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.103 [2024-07-15 15:34:47.766160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.103 [2024-07-15 15:34:47.775246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.775772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.775824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.776071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.776386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.776558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.776570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.776580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.779117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-15 15:34:47.787975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.788427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.788479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.788512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.789121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.789326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.789337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.789346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.791806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-15 15:34:47.800662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.801139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.801157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.801167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.801323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.801481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.801492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.801500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.803961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-15 15:34:47.813462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.814003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.814054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.814086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.814458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.814616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.814630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.814639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.817107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-15 15:34:47.826112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.826603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.826621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.826630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.826795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.826970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.826982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.826990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.829488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-15 15:34:47.838781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.839334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.839353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.839362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.839527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.839693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.839703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.839712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.842304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-15 15:34:47.851573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.852082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.852133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.852164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.852597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.852755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.852766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.852776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.855321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-15 15:34:47.864462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.864992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.865011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.865020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.865186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.865352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.865362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.865371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.868046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-15 15:34:47.877346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.877876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.877894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.877904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.878074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.878245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.878255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.878266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.880938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.104 [2024-07-15 15:34:47.890228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.890759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.890777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.890787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.890964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.891136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.891147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.891156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.104 [2024-07-15 15:34:47.893839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.104 [2024-07-15 15:34:47.903130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.104 [2024-07-15 15:34:47.903655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.104 [2024-07-15 15:34:47.903674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.104 [2024-07-15 15:34:47.903684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.104 [2024-07-15 15:34:47.903864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.104 [2024-07-15 15:34:47.904035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.104 [2024-07-15 15:34:47.904046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.104 [2024-07-15 15:34:47.904057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-15 15:34:47.906727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-15 15:34:47.916026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-15 15:34:47.916551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-15 15:34:47.916570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-15 15:34:47.916581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.105 [2024-07-15 15:34:47.916750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.105 [2024-07-15 15:34:47.916927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-15 15:34:47.916938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-15 15:34:47.916947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-15 15:34:47.919618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.105 [2024-07-15 15:34:47.928932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-15 15:34:47.929432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-15 15:34:47.929451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-15 15:34:47.929461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.105 [2024-07-15 15:34:47.929631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.105 [2024-07-15 15:34:47.929801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-15 15:34:47.929812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-15 15:34:47.929821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-15 15:34:47.932498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-15 15:34:47.941801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-15 15:34:47.942358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-15 15:34:47.942377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-15 15:34:47.942386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.105 [2024-07-15 15:34:47.942557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.105 [2024-07-15 15:34:47.942728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-15 15:34:47.942739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-15 15:34:47.942751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-15 15:34:47.945425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.105 [2024-07-15 15:34:47.954749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.105 [2024-07-15 15:34:47.955283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.105 [2024-07-15 15:34:47.955303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.105 [2024-07-15 15:34:47.955313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.105 [2024-07-15 15:34:47.955482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.105 [2024-07-15 15:34:47.955653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.105 [2024-07-15 15:34:47.955665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.105 [2024-07-15 15:34:47.955674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.105 [2024-07-15 15:34:47.958359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.105 [2024-07-15 15:34:47.967672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.105 [2024-07-15 15:34:47.968139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.105 [2024-07-15 15:34:47.968158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.105 [2024-07-15 15:34:47.968168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.105 [2024-07-15 15:34:47.968338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.105 [2024-07-15 15:34:47.968509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.105 [2024-07-15 15:34:47.968520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.105 [2024-07-15 15:34:47.968530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.105 [2024-07-15 15:34:47.971208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.105 [2024-07-15 15:34:47.980673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.105 [2024-07-15 15:34:47.981142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.105 [2024-07-15 15:34:47.981162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.105 [2024-07-15 15:34:47.981171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.105 [2024-07-15 15:34:47.981342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.105 [2024-07-15 15:34:47.981512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.105 [2024-07-15 15:34:47.981523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.105 [2024-07-15 15:34:47.981531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.105 [2024-07-15 15:34:47.984205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.105 [2024-07-15 15:34:47.993666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.105 [2024-07-15 15:34:47.994176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.105 [2024-07-15 15:34:47.994195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.105 [2024-07-15 15:34:47.994205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.105 [2024-07-15 15:34:47.994375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.105 [2024-07-15 15:34:47.994545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.105 [2024-07-15 15:34:47.994556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.105 [2024-07-15 15:34:47.994566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.105 [2024-07-15 15:34:47.997244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.105 [2024-07-15 15:34:48.006555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.105 [2024-07-15 15:34:48.007112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.105 [2024-07-15 15:34:48.007131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.105 [2024-07-15 15:34:48.007142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.105 [2024-07-15 15:34:48.007312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.105 [2024-07-15 15:34:48.007484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.105 [2024-07-15 15:34:48.007495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.105 [2024-07-15 15:34:48.007504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.010184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.019494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.019961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.019980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.019990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.020159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.020330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.020341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.020351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.023027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.032482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.032933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.032952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.032962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.033132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.033305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.033316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.033326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.035998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.045455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.045959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.045978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.045988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.046158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.046329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.046340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.046349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.049022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.058481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.058940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.058959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.058969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.059139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.059310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.059321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.059331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.062004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.071463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.071987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.072005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.072015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.072185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.072356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.072367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.072376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.075055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.084350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.084879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.084898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.084908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.085079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.085250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.085261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.085271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.087945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.097240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.097768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.097787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.097797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.097972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.098144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.098155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.098164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.100836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.110127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.110588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.110606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.110616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.110786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.110964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.110975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.110985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.113653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.365 [2024-07-15 15:34:48.123116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.365 [2024-07-15 15:34:48.123648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.365 [2024-07-15 15:34:48.123669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.365 [2024-07-15 15:34:48.123679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.365 [2024-07-15 15:34:48.123853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.365 [2024-07-15 15:34:48.124024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.365 [2024-07-15 15:34:48.124036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.365 [2024-07-15 15:34:48.124045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.365 [2024-07-15 15:34:48.126712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.136012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.136542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.136560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.136570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.136741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.136918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.136929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.136939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.139606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.148902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.149409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.149427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.149436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.149606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.149777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.149788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.149798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.152474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.161775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.162311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.162330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.162340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.162510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.162686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.162697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.162706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.165379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.174679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.175209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.175228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.175238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.175407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.175579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.175590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.175600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.178275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.187574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.188101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.188120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.188129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.188301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.188472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.188484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.188493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.191169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.200453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.200849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.200868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.200878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.201048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.201219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.201230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.201239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.203915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.213376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.213835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.213854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.213864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.214034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.214205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.214216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.214226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.216899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.226346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.226875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.226894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.226904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.227074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.227245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.227256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.227266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.229937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.239232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.239735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.239753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.239763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.239939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.240111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.240123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.240132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.242801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.252100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.252628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.252647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.252660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.252829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.253006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.253017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.253026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.255696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.366 [2024-07-15 15:34:48.265008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.366 [2024-07-15 15:34:48.265537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.366 [2024-07-15 15:34:48.265555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.366 [2024-07-15 15:34:48.265565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.366 [2024-07-15 15:34:48.265736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.366 [2024-07-15 15:34:48.265913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.366 [2024-07-15 15:34:48.265924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.366 [2024-07-15 15:34:48.265934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.366 [2024-07-15 15:34:48.268600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.625 [2024-07-15 15:34:48.277898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.625 [2024-07-15 15:34:48.278395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.625 [2024-07-15 15:34:48.278414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.625 [2024-07-15 15:34:48.278423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.278594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.278766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.278777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.278786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.281460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.290924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.291426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.291445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.291454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.291625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.291796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.291811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.291821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.294494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.303800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.304249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.304268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.304279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.304448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.304619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.304630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.304640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.307315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.316765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.317144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.317162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.317172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.317342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.317513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.317524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.317533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.320209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.329667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.330133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.330152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.330163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.330332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.330503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.330515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.330525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.333195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.342652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.343190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.343208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.343218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.343388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.343559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.343570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.343580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.346254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.355550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.356056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.356075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.356085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.356257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.356428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.356439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.356449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.359128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.368426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.368881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.368900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.368910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.369080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.369252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.369264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.369273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.371953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.381409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.381918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.381937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.381947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.382121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.382292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.382303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.382313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.384989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.394284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.394809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.394827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.394843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.395013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.395184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.395195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.395205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.397882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.407173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.407701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.407719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.626 [2024-07-15 15:34:48.407729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.626 [2024-07-15 15:34:48.407905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.626 [2024-07-15 15:34:48.408077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.626 [2024-07-15 15:34:48.408088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.626 [2024-07-15 15:34:48.408097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.626 [2024-07-15 15:34:48.410770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.626 [2024-07-15 15:34:48.420078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.626 [2024-07-15 15:34:48.420585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.626 [2024-07-15 15:34:48.420604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.420613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.420784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.420960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.420972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.420985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.423657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.432954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.433485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.433503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.433514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.433683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.433859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.433871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.433880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.436555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.445857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.446393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.446412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.446422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.446592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.446765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.446776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.446785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.449466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.458760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.459301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.459320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.459330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.459495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.459661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.459672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.459682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.462291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.471525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.472022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.472039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.472049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.472205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.472363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.472374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.472383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.474851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.484231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.484731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.484782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.484814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.485164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.485322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.485333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.485342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.487877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.496969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.497500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.497555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.497590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.498061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.498219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.498230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.498239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.500697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.509639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.510105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.510159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.510191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.510782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.511253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.511264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.511274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.513819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.627 [2024-07-15 15:34:48.522435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:44.627 [2024-07-15 15:34:48.522882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:44.627 [2024-07-15 15:34:48.522900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:44.627 [2024-07-15 15:34:48.522911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:44.627 [2024-07-15 15:34:48.523076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:44.627 [2024-07-15 15:34:48.523242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:44.627 [2024-07-15 15:34:48.523253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:44.627 [2024-07-15 15:34:48.523262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:44.627 [2024-07-15 15:34:48.525866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:44.886 [2024-07-15 15:34:48.535244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.886 [2024-07-15 15:34:48.535770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.886 [2024-07-15 15:34:48.535822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.886 [2024-07-15 15:34:48.535877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.886 [2024-07-15 15:34:48.536311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.886 [2024-07-15 15:34:48.536482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.886 [2024-07-15 15:34:48.536493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.886 [2024-07-15 15:34:48.536502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.886 [2024-07-15 15:34:48.539175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.886 [2024-07-15 15:34:48.548114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.886 [2024-07-15 15:34:48.548613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.886 [2024-07-15 15:34:48.548631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.886 [2024-07-15 15:34:48.548641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.886 [2024-07-15 15:34:48.548805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.886 [2024-07-15 15:34:48.548977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.886 [2024-07-15 15:34:48.548989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.886 [2024-07-15 15:34:48.548998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.886 [2024-07-15 15:34:48.551600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.886 [2024-07-15 15:34:48.560870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.886 [2024-07-15 15:34:48.561377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.886 [2024-07-15 15:34:48.561427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.886 [2024-07-15 15:34:48.561460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.886 [2024-07-15 15:34:48.561999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.886 [2024-07-15 15:34:48.562157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.886 [2024-07-15 15:34:48.562168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.886 [2024-07-15 15:34:48.562176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.886 [2024-07-15 15:34:48.564641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.886 [2024-07-15 15:34:48.573637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.886 [2024-07-15 15:34:48.574160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.886 [2024-07-15 15:34:48.574213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.886 [2024-07-15 15:34:48.574245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.886 [2024-07-15 15:34:48.574851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.886 [2024-07-15 15:34:48.575346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.886 [2024-07-15 15:34:48.575357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.886 [2024-07-15 15:34:48.575366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.886 [2024-07-15 15:34:48.577827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.886 [2024-07-15 15:34:48.586295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.886 [2024-07-15 15:34:48.586723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.886 [2024-07-15 15:34:48.586741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.886 [2024-07-15 15:34:48.586750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.886 [2024-07-15 15:34:48.586913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.886 [2024-07-15 15:34:48.587071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.886 [2024-07-15 15:34:48.587082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.886 [2024-07-15 15:34:48.587091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.886 [2024-07-15 15:34:48.589554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.886 [2024-07-15 15:34:48.598987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.886 [2024-07-15 15:34:48.599505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.886 [2024-07-15 15:34:48.599545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.886 [2024-07-15 15:34:48.599585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.886 [2024-07-15 15:34:48.600197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.886 [2024-07-15 15:34:48.600356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.886 [2024-07-15 15:34:48.600367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.886 [2024-07-15 15:34:48.600375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.602915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.887 [2024-07-15 15:34:48.611670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.612169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.612187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.612197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.612354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.612512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.612523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.612531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.615106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.887 [2024-07-15 15:34:48.624391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.624913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.624932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.624941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.625099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.625257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.625268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.625277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.627739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.887 [2024-07-15 15:34:48.637110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.637627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.637677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.637711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.638319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.638505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.638518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.638535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.640996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.887 [2024-07-15 15:34:48.649844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.650362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.650379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.650388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.650545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.650704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.650714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.650723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.653272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.887 [2024-07-15 15:34:48.662564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.663073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.663141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.663174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.663565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.663723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.663733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.663742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.666204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.887 [2024-07-15 15:34:48.675276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.675786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.675849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.675882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.676284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.676442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.676453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.676461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.678970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.887 [2024-07-15 15:34:48.688011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.688520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.688571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.688604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.689213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.689605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.689616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.689625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.692083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.887 [2024-07-15 15:34:48.700725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.887 [2024-07-15 15:34:48.701230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.887 [2024-07-15 15:34:48.701281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.887 [2024-07-15 15:34:48.701313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.887 [2024-07-15 15:34:48.701744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.887 [2024-07-15 15:34:48.701909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.887 [2024-07-15 15:34:48.701920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.887 [2024-07-15 15:34:48.701929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.887 [2024-07-15 15:34:48.704385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:44.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3219099 Killed "${NVMF_APP[@]}" "$@" 00:29:44.887 15:34:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:44.888 [2024-07-15 15:34:48.713522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:44.888 [2024-07-15 15:34:48.714038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.888 [2024-07-15 15:34:48.714057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.888 [2024-07-15 15:34:48.714067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:44.888 [2024-07-15 15:34:48.714233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.888 [2024-07-15 15:34:48.714399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.888 [2024-07-15 15:34:48.714410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.888 [2024-07-15 15:34:48.714419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.888 [2024-07-15 15:34:48.717060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3220593 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3220593 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3220593 ']' 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:44.888 15:34:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:44.888 [2024-07-15 15:34:48.726519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.888 [2024-07-15 15:34:48.727047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.888 [2024-07-15 15:34:48.727065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.888 [2024-07-15 15:34:48.727076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.888 [2024-07-15 15:34:48.727246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.888 [2024-07-15 15:34:48.727419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.888 [2024-07-15 15:34:48.727430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.888 [2024-07-15 15:34:48.727439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.888 [2024-07-15 15:34:48.730113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.888 [2024-07-15 15:34:48.739412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.888 [2024-07-15 15:34:48.739917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.888 [2024-07-15 15:34:48.739936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.888 [2024-07-15 15:34:48.739946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.888 [2024-07-15 15:34:48.740117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.888 [2024-07-15 15:34:48.740288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.888 [2024-07-15 15:34:48.740299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.888 [2024-07-15 15:34:48.740309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.888 [2024-07-15 15:34:48.742983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
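The shell trace interleaved above explains the refused connects: bdevperf.sh killed the long-running target ("Killed ${NVMF_APP[@]}"), and tgt_init/nvmfappstart are now relaunching nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then waiting for it to listen on /var/tmp/spdk.sock. A minimal sketch of that restart pattern, with the flags and paths copied from the logged command line but otherwise assumed (this is not the harness source):

  # kill the old target; in-flight reconnect attempts now get ECONNREFUSED
  sudo pkill -9 -f nvmf_tgt || true
  # relaunch with the arguments visible in the trace above
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # block until the app answers on its default RPC socket before configuring it
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done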
00:29:44.888 [2024-07-15 15:34:48.752282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.888 [2024-07-15 15:34:48.752815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.888 [2024-07-15 15:34:48.752839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.888 [2024-07-15 15:34:48.752850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.888 [2024-07-15 15:34:48.753023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.888 [2024-07-15 15:34:48.753194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.888 [2024-07-15 15:34:48.753205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.888 [2024-07-15 15:34:48.753215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.888 [2024-07-15 15:34:48.755887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.888 [2024-07-15 15:34:48.765154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.888 [2024-07-15 15:34:48.765682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.888 [2024-07-15 15:34:48.765701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.888 [2024-07-15 15:34:48.765711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.888 [2024-07-15 15:34:48.765881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.888 [2024-07-15 15:34:48.766049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.888 [2024-07-15 15:34:48.766060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.888 [2024-07-15 15:34:48.766069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.888 [2024-07-15 15:34:48.768665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.888 [2024-07-15 15:34:48.770534] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:29:44.888 [2024-07-15 15:34:48.770579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.888 [2024-07-15 15:34:48.778049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.888 [2024-07-15 15:34:48.778514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.888 [2024-07-15 15:34:48.778533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.888 [2024-07-15 15:34:48.778543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.888 [2024-07-15 15:34:48.778713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.888 [2024-07-15 15:34:48.778890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.888 [2024-07-15 15:34:48.778901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.888 [2024-07-15 15:34:48.778911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:44.888 [2024-07-15 15:34:48.781545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:44.888 [2024-07-15 15:34:48.791038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:44.888 [2024-07-15 15:34:48.791567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:44.888 [2024-07-15 15:34:48.791585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:44.888 [2024-07-15 15:34:48.791595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:44.888 [2024-07-15 15:34:48.791769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:44.888 [2024-07-15 15:34:48.791946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:44.888 [2024-07-15 15:34:48.791957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:44.888 [2024-07-15 15:34:48.791967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.147 [2024-07-15 15:34:48.794635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.147 [2024-07-15 15:34:48.803966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.147 [2024-07-15 15:34:48.804364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.147 [2024-07-15 15:34:48.804382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.147 [2024-07-15 15:34:48.804393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.147 [2024-07-15 15:34:48.804563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.147 [2024-07-15 15:34:48.804734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.147 [2024-07-15 15:34:48.804745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.147 [2024-07-15 15:34:48.804755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.147 [2024-07-15 15:34:48.807583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.147 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.147 [2024-07-15 15:34:48.816839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.147 [2024-07-15 15:34:48.817219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.817238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.817248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.817419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.817590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.817603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.817614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.820288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
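The EAL line above ("No free 2048 kB hugepages reported on node 1") is informational in this run, since initialization continues: DPDK found its hugepages on node 0. Per-node availability can be checked from standard kernel sysfs paths (nothing SPDK-specific):

  # free 2 MiB hugepages per NUMA node
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages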
00:29:45.148 [2024-07-15 15:34:48.829747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.830240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.830259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.830269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.830439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.830609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.830620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.830636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.833310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.148 [2024-07-15 15:34:48.842606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.843132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.843150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.843161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.843326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.843493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.843504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.843514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.846114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.148 [2024-07-15 15:34:48.847152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:45.148 [2024-07-15 15:34:48.855464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.855979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.855998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.856009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.856174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.856341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.856353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.856362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.858975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.148 [2024-07-15 15:34:48.868322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.868853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.868871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.868881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.869048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.869215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.869226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.869235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.871838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.148 [2024-07-15 15:34:48.881238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.881774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.881792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.881801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.881974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.882141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.882152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.882161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.884761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.148 [2024-07-15 15:34:48.894138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.894538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.894561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.894572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.894742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.894920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.894931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.894941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.897609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.148 [2024-07-15 15:34:48.907064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.907601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.907620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.907630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.907800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.907977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.907989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.907999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.910667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.148 [2024-07-15 15:34:48.919972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.920475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.920494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.920503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.920665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.920823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.920839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.920849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.921131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.148 [2024-07-15 15:34:48.921158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.148 [2024-07-15 15:34:48.921168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.148 [2024-07-15 15:34:48.921177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.148 [2024-07-15 15:34:48.921185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
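The app_setup_trace notices spell out how to capture the tracepoints enabled by "-e 0xFFFF". Following them as printed (the first command is quoted verbatim by the app and the shared-memory path is the one it names; the -f form for offline reading is an assumption about the spdk_trace tool, not something this log shows):

  # snapshot events from the running app, as the NOTICE suggests
  spdk_trace -s nvmf -i 0
  # or keep the raw trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  spdk_trace -f /tmp/nvmf_trace.0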
00:29:45.148 [2024-07-15 15:34:48.921226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.148 [2024-07-15 15:34:48.921309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:45.148 [2024-07-15 15:34:48.921310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.148 [2024-07-15 15:34:48.923521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.148 [2024-07-15 15:34:48.932986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.933463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.148 [2024-07-15 15:34:48.933483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.148 [2024-07-15 15:34:48.933494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.148 [2024-07-15 15:34:48.933664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.148 [2024-07-15 15:34:48.933842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.148 [2024-07-15 15:34:48.933854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.148 [2024-07-15 15:34:48.933864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.148 [2024-07-15 15:34:48.936532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.148 [2024-07-15 15:34:48.945995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.148 [2024-07-15 15:34:48.946470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.149 [2024-07-15 15:34:48.946491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.149 [2024-07-15 15:34:48.946502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.149 [2024-07-15 15:34:48.946672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.149 [2024-07-15 15:34:48.946849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.149 [2024-07-15 15:34:48.946860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.149 [2024-07-15 15:34:48.946870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.149 [2024-07-15 15:34:48.949537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
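The three "Reactor started" notices line up with the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, selecting cores 1, 2 and 3, which is also why the app reported "Total cores available: 3". Expanding such a mask takes one line of plain bash:

  printf 'mask 0xE -> cores:'; for i in {0..7}; do (( (0xE >> i) & 1 )) && printf ' %d' "$i"; done; echo
  # prints: mask 0xE -> cores: 1 2 3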
00:29:45.149 [2024-07-15 15:34:48.959017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.149 [2024-07-15 15:34:48.959551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.149 [2024-07-15 15:34:48.959572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.149 [2024-07-15 15:34:48.959583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.149 [2024-07-15 15:34:48.959756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.149 [2024-07-15 15:34:48.959932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.149 [2024-07-15 15:34:48.959943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.149 [2024-07-15 15:34:48.959954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.149 [2024-07-15 15:34:48.962622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.149 [2024-07-15 15:34:48.971926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.149 [2024-07-15 15:34:48.972455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.149 [2024-07-15 15:34:48.972476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.149 [2024-07-15 15:34:48.972487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.149 [2024-07-15 15:34:48.972657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.149 [2024-07-15 15:34:48.972830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.149 [2024-07-15 15:34:48.972846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.149 [2024-07-15 15:34:48.972856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.149 [2024-07-15 15:34:48.975525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.149 [2024-07-15 15:34:48.984817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.149 [2024-07-15 15:34:48.985251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.149 [2024-07-15 15:34:48.985270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.149 [2024-07-15 15:34:48.985280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.149 [2024-07-15 15:34:48.985450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.149 [2024-07-15 15:34:48.985622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.149 [2024-07-15 15:34:48.985633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.149 [2024-07-15 15:34:48.985643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.149 [2024-07-15 15:34:48.988320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.149 [2024-07-15 15:34:48.997817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.149 [2024-07-15 15:34:48.998274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.149 [2024-07-15 15:34:48.998294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.149 [2024-07-15 15:34:48.998304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.149 [2024-07-15 15:34:48.998480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.149 [2024-07-15 15:34:48.998651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.149 [2024-07-15 15:34:48.998663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.149 [2024-07-15 15:34:48.998672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.149 [2024-07-15 15:34:49.001347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.149 [2024-07-15 15:34:49.010795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.149 [2024-07-15 15:34:49.011326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.149 [2024-07-15 15:34:49.011344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.149 [2024-07-15 15:34:49.011354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.149 [2024-07-15 15:34:49.011523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.149 [2024-07-15 15:34:49.011695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.149 [2024-07-15 15:34:49.011707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.149 [2024-07-15 15:34:49.011716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.149 [2024-07-15 15:34:49.014389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.149 [2024-07-15 15:34:49.023682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.149 [2024-07-15 15:34:49.024220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.149 [2024-07-15 15:34:49.024239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.149 [2024-07-15 15:34:49.024249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.149 [2024-07-15 15:34:49.024420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.149 [2024-07-15 15:34:49.024591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.149 [2024-07-15 15:34:49.024602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.149 [2024-07-15 15:34:49.024611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.149 [2024-07-15 15:34:49.027282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:45.932 [2024-07-15 15:34:49.579409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.932 [2024-07-15 15:34:49.579741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.932 [2024-07-15 15:34:49.579759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:45.932 [2024-07-15 15:34:49.579770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:45.932 [2024-07-15 15:34:49.579947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:45.932 [2024-07-15 15:34:49.580118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.932 [2024-07-15 15:34:49.580129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.932 [2024-07-15 15:34:49.580139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.932 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:45.932 [2024-07-15 15:34:49.582808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.932 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:29:45.932 15:34:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:45.932 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:45.932 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:45.932 [2024-07-15 15:34:49.592433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.932 [2024-07-15 15:34:49.592878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.933 [2024-07-15 15:34:49.592899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420
00:29:45.933 [2024-07-15 15:34:49.592909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set
00:29:45.933 [2024-07-15 15:34:49.593079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor
00:29:45.933 [2024-07-15 15:34:49.593249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:45.933 [2024-07-15 15:34:49.593260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:45.933 [2024-07-15 15:34:49.593271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:45.933 [2024-07-15 15:34:49.595944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
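The xtrace records interleaved above come from the harness's bounded start-up poll: common/autotest_common.sh counts an attempt counter down, `(( i == 0 ))` decides whether it timed out, and `return 0` plus `timing_exit start_nvmf_tgt` mark the target app as up, even while the initiator-side reconnects keep failing. A sketch of a loop of that shape, with illustrative names and timings only (the real helper lives in SPDK's autotest_common.sh):

```bash
# Sketch only; names, socket path, and timings are illustrative, not SPDK's
# actual helper. The shape matches the xtrace above: count i down, and
# '(( i == 0 ))' picks between timeout (failure) and 'return 0' (app is up).
wait_for_app() {
    local pid=$1 i
    for ((i = 20; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while we waited
        [[ -S /var/tmp/spdk.sock ]] && break     # RPC socket appeared: ready
        sleep 0.5
    done
    (( i == 0 )) && return 1                     # polls exhausted: timed out
    return 0
}
```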
00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.933 [2024-07-15 15:34:49.631272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.933 [2024-07-15 15:34:49.631731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.933 [2024-07-15 15:34:49.631751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.933 [2024-07-15 15:34:49.631761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.933 [2024-07-15 15:34:49.631936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.933 [2024-07-15 15:34:49.631999] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.933 [2024-07-15 15:34:49.632107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.933 [2024-07-15 15:34:49.632119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.933 [2024-07-15 15:34:49.632129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.933 [2024-07-15 15:34:49.634799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.933 [2024-07-15 15:34:49.644259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.933 [2024-07-15 15:34:49.644697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.933 [2024-07-15 15:34:49.644715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.933 [2024-07-15 15:34:49.644725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.933 [2024-07-15 15:34:49.644901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.933 [2024-07-15 15:34:49.645072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.933 [2024-07-15 15:34:49.645083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.933 [2024-07-15 15:34:49.645092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:45.933 [2024-07-15 15:34:49.647758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.933 [2024-07-15 15:34:49.657210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.933 [2024-07-15 15:34:49.657715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.933 [2024-07-15 15:34:49.657734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.933 [2024-07-15 15:34:49.657744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.933 [2024-07-15 15:34:49.657928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.933 [2024-07-15 15:34:49.658099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.933 [2024-07-15 15:34:49.658111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.933 [2024-07-15 15:34:49.658121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.933 [2024-07-15 15:34:49.660785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.933 [2024-07-15 15:34:49.670086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.933 [2024-07-15 15:34:49.670530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.933 [2024-07-15 15:34:49.670549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.933 [2024-07-15 15:34:49.670560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.933 [2024-07-15 15:34:49.670731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.933 [2024-07-15 15:34:49.670906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.933 [2024-07-15 15:34:49.670918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.933 [2024-07-15 15:34:49.670929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.933 [2024-07-15 15:34:49.673599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
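rpc_cmd in the trace above is the test harness's wrapper around SPDK's scripts/rpc.py, aimed at the target app started earlier in the run. A by-hand equivalent of the two calls issued so far, as a minimal sketch assuming a stock SPDK checkout and the default RPC socket (the comments are added here, not from the log):

    # Create the TCP transport; -u 8192 sets the I/O unit size and -o touches the
    # TCP C2H-success optimization (my reading of rpc.py: it disables it).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Back the test with a 64 MiB RAM disk of 512-byte blocks named Malloc0
    # (its name is echoed on the next line of the log).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0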
00:29:45.933 Malloc0 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.933 [2024-07-15 15:34:49.683061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.933 [2024-07-15 15:34:49.683617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.933 [2024-07-15 15:34:49.683636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.933 [2024-07-15 15:34:49.683646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.933 [2024-07-15 15:34:49.683816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.933 [2024-07-15 15:34:49.683991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.933 [2024-07-15 15:34:49.684003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.933 [2024-07-15 15:34:49.684012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:45.933 [2024-07-15 15:34:49.686681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.933 [2024-07-15 15:34:49.695982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:45.933 [2024-07-15 15:34:49.696478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.933 [2024-07-15 15:34:49.696497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19eca70 with addr=10.0.0.2, port=4420 00:29:45.933 [2024-07-15 15:34:49.696507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eca70 is same with the state(5) to be set 00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.933 [2024-07-15 15:34:49.696679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eca70 (9): Bad file descriptor 00:29:45.933 [2024-07-15 15:34:49.696862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:45.933 [2024-07-15 15:34:49.696875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:45.933 [2024-07-15 15:34:49.696884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
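With the transport and Malloc0 in place, the script assembles the subsystem: create it, attach the bdev as a namespace, and (first thing in the trace below) open a TCP listener on 10.0.0.2:4420. The same three RPCs by hand, a sketch under the same assumptions as above:

    # -a admits any host NQN; -s sets the subsystem serial number.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Expose Malloc0 as a namespace of the new subsystem.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Accept NVMe/TCP hosts on 10.0.0.2:4420.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420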
00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:45.933 [2024-07-15 15:34:49.699552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:45.933 [2024-07-15 15:34:49.700127] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:45.933 15:34:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3219530
00:29:45.933 [2024-07-15 15:34:49.708864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:45.934 [2024-07-15 15:34:49.741496] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:55.956
00:29:55.956                                                            Latency(us)
00:29:55.956 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s   Average      min      max
00:29:55.956 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:55.956 Verification LBA range: start 0x0 length 0x4000
00:29:55.956 Nvme1n1                     :      15.01  8826.67    34.48 13283.05    0.00   5769.84   825.75 17825.79
00:29:55.956 ===================================================================================================================
00:29:55.956 Total                       :             8826.67    34.48 13283.05    0.00   5769.84   825.75 17825.79
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:55.956 rmmod nvme_tcp
00:29:55.956 rmmod nvme_fabrics
00:29:55.956 rmmod nvme_keyring
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3220593 ']'
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3220593
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3220593 ']'
00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 
3220593 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3220593 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3220593' 00:29:55.956 killing process with pid 3220593 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3220593 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3220593 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.956 15:34:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.957 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.957 15:34:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.334 15:35:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.334 00:29:57.334 real 0m27.680s 00:29:57.334 user 1m2.395s 00:29:57.334 sys 0m8.346s 00:29:57.334 15:35:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.334 15:35:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:57.334 ************************************ 00:29:57.334 END TEST nvmf_bdevperf 00:29:57.334 ************************************ 00:29:57.334 15:35:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:57.334 15:35:00 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:57.334 15:35:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:57.334 15:35:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.334 15:35:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.334 ************************************ 00:29:57.334 START TEST nvmf_target_disconnect 00:29:57.334 ************************************ 00:29:57.334 15:35:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:57.334 * Looking for test storage... 
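A quick consistency check on the bdevperf summary earlier in this run: with 4096-byte I/Os, throughput in MiB/s is just IOPS x 4096 / 2^20, so the Total row's 8826.67 IOPS should (and does) come out to 34.48 MiB/s. Verifiable with any awk:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8826.67 * 4096 / 1048576 }'
    # -> 34.48 MiB/s, matching the MiB/s column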
00:29:57.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.334 15:35:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.335 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.335 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.335 15:35:01 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.335 15:35:01 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:03.905 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.905 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:03.906 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.906 15:35:07 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:03.906 Found net devices under 0000:af:00.0: cvl_0_0 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:03.906 Found net devices under 0000:af:00.1: cvl_0_1 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.906 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:04.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:30:04.165 00:30:04.165 --- 10.0.0.2 ping statistics --- 00:30:04.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.165 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:04.165 00:30:04.165 --- 10.0.0.1 ping statistics --- 00:30:04.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.165 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:04.165 ************************************ 00:30:04.165 START TEST nvmf_target_disconnect_tc1 00:30:04.165 ************************************ 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:30:04.165 
15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:04.165 15:35:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.165 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.165 [2024-07-15 15:35:08.017335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.165 [2024-07-15 15:35:08.017384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1867140 with addr=10.0.0.2, port=4420 00:30:04.165 [2024-07-15 15:35:08.017407] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:04.165 [2024-07-15 15:35:08.017421] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:04.165 [2024-07-15 15:35:08.017429] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:04.165 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:04.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:04.165 Initializing NVMe Controllers 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:04.165 00:30:04.165 real 0m0.119s 00:30:04.165 user 0m0.042s 00:30:04.165 sys 
0m0.076s 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:04.165 ************************************ 00:30:04.165 END TEST nvmf_target_disconnect_tc1 00:30:04.165 ************************************ 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.165 15:35:08 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:04.425 ************************************ 00:30:04.425 START TEST nvmf_target_disconnect_tc2 00:30:04.425 ************************************ 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3225893 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3225893 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3225893 ']' 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
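nvmfappstart -m 0xF0 pins the target's reactors to CPUs 4-7 (mask 0xF0 = 11110000b), which is why the startup notices just below report reactors on cores 4, 5, 6 and 7; the reconnect initiator is later launched with -c 0xF (CPUs 0-3), so target and initiator never share a core. The mask arithmetic, for the record:

    echo "obase=2; $((0xF0))" | bc    # -> 11110000, i.e. CPU bits 4-7 set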
00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:04.425 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:04.425 [2024-07-15 15:35:08.165034] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:04.425 [2024-07-15 15:35:08.165082] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.425 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.425 [2024-07-15 15:35:08.254308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.425 [2024-07-15 15:35:08.326775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.425 [2024-07-15 15:35:08.326815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.425 [2024-07-15 15:35:08.326824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.425 [2024-07-15 15:35:08.326835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.425 [2024-07-15 15:35:08.326843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.425 [2024-07-15 15:35:08.326921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:04.425 [2024-07-15 15:35:08.327012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:04.425 [2024-07-15 15:35:08.327142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:04.425 [2024-07-15 15:35:08.327144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.361 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:05.362 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:05.362 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:05.362 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:05.362 15:35:08 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.362 Malloc0 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.362 [2024-07-15 15:35:09.031554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.362 [2024-07-15 15:35:09.059807] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3226061 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:05.362 15:35:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:05.362 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.271 15:35:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3225893 00:30:07.271 15:35:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:07.271 Read completed with error (sct=0, sc=8) 00:30:07.271 starting I/O failed 00:30:07.271 Read completed with error (sct=0, sc=8) 00:30:07.271 starting I/O failed 00:30:07.271 Write completed with error (sct=0, sc=8) 00:30:07.271 starting I/O failed 00:30:07.271 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with 
error (sct=0, sc=8) 00:30:07.272 [2024-07-15 15:35:11.086563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 [2024-07-15 15:35:11.086790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, 
sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 [2024-07-15 15:35:11.087022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 
00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Write completed with error (sct=0, sc=8) 00:30:07.272 starting I/O failed 00:30:07.272 Read completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 Write completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 Write completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 Write completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 Write completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 Write completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 Write completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 Write completed with error (sct=0, sc=8) 00:30:07.273 starting I/O failed 00:30:07.273 [2024-07-15 15:35:11.087243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:07.273 [2024-07-15 15:35:11.087536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.087556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.087818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.087838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.088043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.088056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.088258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.088272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.088637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.088677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 
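[Editor's note: the "CQ transport error -6 (No such device or address)" entries above are the host-side completion path reacting to the kill -9 of the target at 15:35:11: once the connection is gone, polling a qpair returns a negative errno, and -6 is -ENXIO on Linux, matching the message text. A sketch of that connect-then-poll shape, assuming an SPDK install and a target that is initially reachable; the app name is hypothetical and a real connect would also need the subsystem NQN filled into trid.subnqn.]

```c
/* Sketch of the host path behind the "CQ transport error -6" log
 * lines: connect, then poll completions until the (killed) target
 * drops the connection and the poll returns a negative errno. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;
	int32_t rc;

	spdk_env_opts_init(&opts);
	opts.name = "cq_error_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420");
	/* Assumption: trid.subnqn must also be set for a real target. */

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		spdk_nvme_detach(ctrlr);
		return 1;
	}

	/* While this loop runs, kill -9 the target process (as the test
	 * does above); the next poll then fails with a negative errno,
	 * e.g. -ENXIO == -6, and outstanding I/O completes with errors. */
	for (;;) {
		rc = spdk_nvme_qpair_process_completions(qpair, 0);
		if (rc < 0) {
			fprintf(stderr, "CQ transport error %d on qpair\n", rc);
			break;
		}
	}

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(ctrlr);
	return 0;
}
```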
00:30:07.273 [2024-07-15 15:35:11.089011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.089052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.089427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.089467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.089841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.089881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.090196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.090236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.090490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.090530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.090926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.090967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.091295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.091335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.091651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.091691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.091922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.091935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.092229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.092284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 
00:30:07.273 [2024-07-15 15:35:11.092695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.092737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.093072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.093113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.093509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.093549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.093938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.093979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.094321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.094361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.094684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.094723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.095104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.095117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.095368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.095408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.095782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.095822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.096227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.096267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 
00:30:07.273 [2024-07-15 15:35:11.096506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.096546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.096889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.096930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.097273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.097314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.097729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.097769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.098149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.098164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.098470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.098484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.098664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.098677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.099006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.099019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.099313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.099353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.099746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.099785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 
00:30:07.273 [2024-07-15 15:35:11.100183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.100223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.100592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.273 [2024-07-15 15:35:11.100632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.273 qpair failed and we were unable to recover it. 00:30:07.273 [2024-07-15 15:35:11.100934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.100975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.101370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.101409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.101738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.101778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.102107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.102124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.102305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.102323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.102612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.102652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.102981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.102998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.103267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.103284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 
00:30:07.274 [2024-07-15 15:35:11.103542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.103559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.103754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.103771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.103974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.103991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.104330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.104369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.104761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.104800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.105115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.105146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.105511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.105550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.105949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.105991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.106373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.106390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.106775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.106814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 
00:30:07.274 [2024-07-15 15:35:11.107193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.107234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.107539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.107580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.107970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.108010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.108422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.108461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.108844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.108886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.109245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.109287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.109648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.109688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.109989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.110029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.110400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.110440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.110688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.110727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 
00:30:07.274 [2024-07-15 15:35:11.111065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.111106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.111427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.111468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.111712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.111751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.112095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.112141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.112476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.112516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.112858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.112899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.113229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.113247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.113491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.113509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.113849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.113890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.114201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.114242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 
00:30:07.274 [2024-07-15 15:35:11.114625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.114665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.115055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.115097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.274 [2024-07-15 15:35:11.115491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.274 [2024-07-15 15:35:11.115508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.274 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.115741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.115781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.116183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.116224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.116613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.116652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.117062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.117103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.117396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.117414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.117699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.117739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.118076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.118117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 
00:30:07.275 [2024-07-15 15:35:11.118458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.118498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.118795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.118847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.119160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.119178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.119526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.119566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.119945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.119963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.120273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.120291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.120555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.120573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.120841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.120859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.121075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.121116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.121501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.121541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 
00:30:07.275 [2024-07-15 15:35:11.121930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.121970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.122340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.122380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.122770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.122810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.123201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.123242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.123560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.123599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.123981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.123999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.124341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.124381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.124766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.124805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.125121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.125139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.125456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.125496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 
00:30:07.275 [2024-07-15 15:35:11.125793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.125844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.126062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.126080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.126404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.126443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.126788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.126828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.127231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.127310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.127648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.127691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.128078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.128098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.128443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.128484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.128794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.128844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.129118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.129135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 
00:30:07.275 [2024-07-15 15:35:11.129419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.129460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.129823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.129871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.130229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.130309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.130742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.275 [2024-07-15 15:35:11.130784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.275 qpair failed and we were unable to recover it. 00:30:07.275 [2024-07-15 15:35:11.131192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.131234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.131598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.131638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.131953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.131994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.132324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.132341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.132739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.132779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.133110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.133150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 
00:30:07.276 [2024-07-15 15:35:11.133536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.133576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.133830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.133881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.134189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.134229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.134566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.134607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.134993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.135011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.135352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.135392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.135757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.135797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.136134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.136175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.136556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.136595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 00:30:07.276 [2024-07-15 15:35:11.136911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.276 [2024-07-15 15:35:11.136953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.276 qpair failed and we were unable to recover it. 
00:30:07.276 [2024-07-15 15:35:11.137321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.276 [2024-07-15 15:35:11.137360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:07.276 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 15:35:11.137 through 15:35:11.213 ...]
00:30:07.558 [2024-07-15 15:35:11.213414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.558 [2024-07-15 15:35:11.213432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:07.558 qpair failed and we were unable to recover it.
00:30:07.558 [2024-07-15 15:35:11.213785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.558 [2024-07-15 15:35:11.213824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.558 qpair failed and we were unable to recover it. 00:30:07.558 [2024-07-15 15:35:11.214140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.558 [2024-07-15 15:35:11.214187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.558 qpair failed and we were unable to recover it. 00:30:07.558 [2024-07-15 15:35:11.214401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.558 [2024-07-15 15:35:11.214418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.558 qpair failed and we were unable to recover it. 00:30:07.558 [2024-07-15 15:35:11.214613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.558 [2024-07-15 15:35:11.214631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.558 qpair failed and we were unable to recover it. 00:30:07.558 [2024-07-15 15:35:11.214971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.558 [2024-07-15 15:35:11.215012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.558 qpair failed and we were unable to recover it. 00:30:07.558 [2024-07-15 15:35:11.215350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.558 [2024-07-15 15:35:11.215391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.558 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.215774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.215813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.216080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.216121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.216440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.216480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.216866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.216907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 
00:30:07.559 [2024-07-15 15:35:11.217249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.217290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.217659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.217699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.217939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.217982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.218370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.218410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.218727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.218766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.219102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.219143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.219522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.219539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.219863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.219881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.220157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.220197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.220507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.220547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 
00:30:07.559 [2024-07-15 15:35:11.220897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.220938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.221238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.221278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.221524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.221542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.221882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.221923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.222266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.222305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.222649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.222689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.223017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.223058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.223359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.223399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.223731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.223770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.224100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.224141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 
00:30:07.559 [2024-07-15 15:35:11.224533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.224574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.224964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.225006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.225301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.225341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.225745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.225786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.226197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.226238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.226634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.226655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.226969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.226987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.227272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.227312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.559 [2024-07-15 15:35:11.227658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.559 [2024-07-15 15:35:11.227698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.559 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.228072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.228113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 
00:30:07.560 [2024-07-15 15:35:11.228435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.228476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.228863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.228904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.229295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.229334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.229705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.229746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.230112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.230153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.230549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.230589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.230877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.230918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.231301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.231341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.231742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.231782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.232114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.232154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 
00:30:07.560 [2024-07-15 15:35:11.232405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.232423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.232705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.232745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.233170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.233211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.233594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.233634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.234014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.234056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.234424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.234464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.234716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.234734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.235000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.235018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.235367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.235407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.235775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.235814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 
00:30:07.560 [2024-07-15 15:35:11.236155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.236196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.236599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.236617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.236872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.236920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.237180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.237221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.237543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.237582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.237958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.237999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.238392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.238432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.238767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.238806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.239182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.560 [2024-07-15 15:35:11.239222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.560 qpair failed and we were unable to recover it. 00:30:07.560 [2024-07-15 15:35:11.239614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.239654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 
00:30:07.561 [2024-07-15 15:35:11.239938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.239979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.240371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.240411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.240730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.240770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.241193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.241234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.241611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.241629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.241892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.241910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.242190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.242234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.242584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.242623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.242954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.242995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.243384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.243424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 
00:30:07.561 [2024-07-15 15:35:11.243813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.243864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.244173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.244213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.244525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.244544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.244898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.244940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.245278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.245318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.245688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.245728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.246099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.246141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.246505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.246523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.246790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.246830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.247236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.247287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 
00:30:07.561 [2024-07-15 15:35:11.247626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.247667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.248079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.248120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.248513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.248557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.248922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.248963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.249361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.249401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.249791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.249846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.250172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.250212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.250595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.250635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.251030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.251072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.251465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.251505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 
00:30:07.561 [2024-07-15 15:35:11.251874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.251915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.252291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.252331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.252740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.252779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.253180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.561 [2024-07-15 15:35:11.253222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.561 qpair failed and we were unable to recover it. 00:30:07.561 [2024-07-15 15:35:11.253635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.253674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.253939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.253981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.254370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.254410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.254824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.254875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.255197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.255238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.255539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.255579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 
00:30:07.562 [2024-07-15 15:35:11.255846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.255888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.256276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.256315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.256699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.256738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.257131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.257172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.257460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.257478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.257799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.257847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.258240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.258280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.258638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.258678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.259088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.259130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.259483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.259501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 
00:30:07.562 [2024-07-15 15:35:11.259841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.259860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.260194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.260234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.260649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.260688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.261083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.261126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.261459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.261500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.261887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.261929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.262174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.262214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.262631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.262649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.262940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.262959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 00:30:07.562 [2024-07-15 15:35:11.263170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.562 [2024-07-15 15:35:11.263188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.562 qpair failed and we were unable to recover it. 
00:30:07.562 [2024-07-15 15:35:11.263399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.562 [2024-07-15 15:35:11.263416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:07.562 qpair failed and we were unable to recover it.
[... the same three-line error repeats without variation from 15:35:11.263676 through 15:35:11.339634: posix_sock_create reports connect() failed, errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420, and each time the qpair fails and cannot be recovered ...]
00:30:07.567 [2024-07-15 15:35:11.339954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.567 [2024-07-15 15:35:11.339995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:07.567 qpair failed and we were unable to recover it.
00:30:07.567 [2024-07-15 15:35:11.340344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.567 [2024-07-15 15:35:11.340384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.567 qpair failed and we were unable to recover it. 00:30:07.567 [2024-07-15 15:35:11.340718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.567 [2024-07-15 15:35:11.340758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.341158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.341200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.341589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.341628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.342030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.342071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.342381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.342433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.342768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.342785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.343154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.343172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.343511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.343529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.343797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.343849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 
00:30:07.568 [2024-07-15 15:35:11.344243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.344284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.344524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.344564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.344893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.344912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.345160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.345200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.345567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.345607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.346013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.346030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.346208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.346226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.346570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.346610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.346948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.346989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.347377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.347416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 
00:30:07.568 [2024-07-15 15:35:11.347599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.347617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.347968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.348008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.348339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.348378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.348746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.348786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.349036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.349077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.349386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.349425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.349804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.349856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.350163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.350203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.350433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.350472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.350854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.350895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 
00:30:07.568 [2024-07-15 15:35:11.351263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.351302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.351637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.351676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.352090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.352130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.352528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.352567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.352896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.352916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.353191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.353231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.353599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.353639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.354033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.354074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.354447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.354486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.354797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.354846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 
00:30:07.568 [2024-07-15 15:35:11.355077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.355117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.355450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.355489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.355764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.568 [2024-07-15 15:35:11.355803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.568 qpair failed and we were unable to recover it. 00:30:07.568 [2024-07-15 15:35:11.356128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.356169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.356503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.356542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.356860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.356901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.357159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.357200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.357451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.357491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.357741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.357781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.358174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.358191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 
00:30:07.569 [2024-07-15 15:35:11.358465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.358505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.358727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.358745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.358947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.358987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.359324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.359364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.359738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.359779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.360076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.360116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.360503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.360542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.360862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.360904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.361134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.361173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.361490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.361530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 
00:30:07.569 [2024-07-15 15:35:11.361844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.361862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.362064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.362084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.362341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.362387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.362693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.362741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.363005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.363046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.363287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.363326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.363644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.363690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.363879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.363896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.364224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.364242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.364495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.364513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 
00:30:07.569 [2024-07-15 15:35:11.364830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.364886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.365134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.365174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.365486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.365526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.365889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.365930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.366210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.366249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.366653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.569 [2024-07-15 15:35:11.366693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.569 qpair failed and we were unable to recover it. 00:30:07.569 [2024-07-15 15:35:11.368101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.368136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.368537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.368555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.368821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.368896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.369264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.369303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 
00:30:07.570 [2024-07-15 15:35:11.369557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.369597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.369896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.369937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.370274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.370314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.370584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.370601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.370938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.370956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.371280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.371321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.371684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.371723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.372030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.372048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.372292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.372332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.372734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.372773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 
00:30:07.570 [2024-07-15 15:35:11.373014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.373055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.373461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.373501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.373868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.373909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.374300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.374339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.374602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.374641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.374946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.374963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.375198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.375215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.375459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.375476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.375766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.375805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.376092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.376133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 
00:30:07.570 [2024-07-15 15:35:11.376380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.376419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.376699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.376732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.377059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.377101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.377386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.377426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.377657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.377696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.377955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.377995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.378387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.378427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.378677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.378717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.378964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.379004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.379342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.379382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 
00:30:07.570 [2024-07-15 15:35:11.379694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.379734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.380104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.380121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.380411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.380450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.380824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.570 [2024-07-15 15:35:11.380874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.570 qpair failed and we were unable to recover it. 00:30:07.570 [2024-07-15 15:35:11.381128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.381168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.381460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.381500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.381804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.381858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.382205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.382245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.382541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.382580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.382892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.382932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 
00:30:07.571 [2024-07-15 15:35:11.383266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.383306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.383534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.383574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.383847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.383888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.384181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.384221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.384496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.384535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.384893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.384934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.385253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.385270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.385626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.385643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.385867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.385907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 00:30:07.571 [2024-07-15 15:35:11.386159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.571 [2024-07-15 15:35:11.386205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.571 qpair failed and we were unable to recover it. 
00:30:07.571 [2024-07-15 15:35:11.386583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.571 [2024-07-15 15:35:11.386623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:07.571 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple for tqpair=0x19dd210 repeats ~66 more times (15:35:11.386887 through 15:35:11.408705) and is elided here ...]
00:30:07.573 [2024-07-15 15:35:11.409045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.573 [2024-07-15 15:35:11.409126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420
00:30:07.573 qpair failed and we were unable to recover it.
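For context on the failure above: errno = 111 on Linux is ECONNREFUSED, i.e. the target at 10.0.0.2:4420 (4420 is the conventional NVMe/TCP port) is reachable but nothing is accepting connections at that moment. A minimal standalone sketch, not SPDK code, that reproduces the same errno against a port with no listener (address and port copied from the log; run it on a host where 10.0.0.2 answers but has no listener bound):

```c
/* Minimal sketch, not SPDK code: reproduce "connect() failed, errno = 111"
 * (ECONNREFUSED) by dialing a host that is up but has no listener on the
 * port. Address and port are taken from the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),      /* conventional NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* If the host is reachable but nothing listens on the port,
         * Linux returns ECONNREFUSED, printed here as errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```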
[... the same triple for tqpair=0x7ff174000b90 repeats ~39 more times (15:35:11.409574 through 15:35:11.422668) and is elided here ...]
00:30:07.574 [2024-07-15 15:35:11.423099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.574 [2024-07-15 15:35:11.423178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:07.574 qpair failed and we were unable to recover it.
00:30:07.574 [2024-07-15 15:35:11.424083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.424125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.424414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.424454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.424779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.424818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.425155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.425195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.425569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.425609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.425920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.425961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.426274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.426313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.426608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.426649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.426911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.426951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.427264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.427281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 
00:30:07.574 [2024-07-15 15:35:11.427647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.427686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.428028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.428068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.428343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.428360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.428708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.428747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.429081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.429122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.429420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.429460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.429758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.429797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.430116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.430155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.430451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.430491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.430822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.430845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 
00:30:07.574 [2024-07-15 15:35:11.431053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.431070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.574 [2024-07-15 15:35:11.431363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.574 [2024-07-15 15:35:11.431402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.574 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.431711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.431750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.432012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.432053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.432438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.432484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.432722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.432761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.433074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.433092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.433385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.433424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.433731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.433770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.434081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.434122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 
00:30:07.575 [2024-07-15 15:35:11.434418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.434457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.434823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.434872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.435269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.435309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.435565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.435604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.435873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.435913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.436275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.436314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.436624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.436664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.436978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.437019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.437345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.437385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.437625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.437664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 
00:30:07.575 [2024-07-15 15:35:11.437956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.437974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.438216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.438233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.438540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.438580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.438963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.439022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.439431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.439471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.439734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.439773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.440123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.440164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.440477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.440516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.440831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.440884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.441247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.441286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 
00:30:07.575 [2024-07-15 15:35:11.441564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.441604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.441982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.442002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.442259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.442299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.442661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.442700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.443066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.443106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.443424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.443441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.443752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.443768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.444024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.444041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.444242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.444259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 00:30:07.575 [2024-07-15 15:35:11.444580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.575 [2024-07-15 15:35:11.444619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.575 qpair failed and we were unable to recover it. 
00:30:07.575 [2024-07-15 15:35:11.444920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.576 [2024-07-15 15:35:11.444961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.576 qpair failed and we were unable to recover it. 00:30:07.576 [2024-07-15 15:35:11.445354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.576 [2024-07-15 15:35:11.445393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.576 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.445648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.445666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.445981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.445998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.446244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.446260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.446517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.446534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.446804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.446820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.446996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.447014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.447293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.447310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-07-15 15:35:11.447654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-07-15 15:35:11.447671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 
00:30:07.854 [2024-07-15 15:35:11.513196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.513236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.513562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.513602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.513917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.513957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.514254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.514293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.514656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.514696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.515026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.515067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.515357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.515374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.515729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.515769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.516110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.516156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.516402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.516442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 
00:30:07.854 [2024-07-15 15:35:11.516759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.516798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.517052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.517097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.517413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.517453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.517788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.517827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.518208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.518248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.518498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.854 [2024-07-15 15:35:11.518537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.854 qpair failed and we were unable to recover it. 00:30:07.854 [2024-07-15 15:35:11.518860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.518902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.519234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.519273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.519635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.519674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.520035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.520076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 
00:30:07.855 [2024-07-15 15:35:11.520294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.520311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.520591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.520608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.520855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.520873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.521168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.521208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.521521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.521560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.521893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.521933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.522254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.522293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.522679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.522719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.523079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.523096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.523353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.523370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 
00:30:07.855 [2024-07-15 15:35:11.523565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.523582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.523896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.523937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.524230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.524269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.524700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.524739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.525047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.525065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.525382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.525427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.525723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.525763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.525980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.525998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.526332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.526348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.526677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.526694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 
00:30:07.855 [2024-07-15 15:35:11.526964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.526981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.527249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.527288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.527608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.527647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.527999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.528039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.528289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.528328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.528626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.528665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.529056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.529096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.529386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.529403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.529695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.529735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.530104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.530144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 
00:30:07.855 [2024-07-15 15:35:11.530519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.530559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.530940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.530980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.531280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.531320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.531625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.531663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.532027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.532068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.532395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.532435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.532853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.855 [2024-07-15 15:35:11.532893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.855 qpair failed and we were unable to recover it. 00:30:07.855 [2024-07-15 15:35:11.533270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.533310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.533691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.533730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.534097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.534138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 
00:30:07.856 [2024-07-15 15:35:11.534514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.534554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.534934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.534975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.535340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.535380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.535742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.535781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.536105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.536146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.536519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.536558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.536805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.536856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.537246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.537285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.537625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.537664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.537961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.538002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 
00:30:07.856 [2024-07-15 15:35:11.538388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.538428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.538594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.538633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.538940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.538981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.539365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.539405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.539781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.539820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.540061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.540078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.540347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.540364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.540651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.540668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.541008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.541025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.541295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.541334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 
00:30:07.856 [2024-07-15 15:35:11.541646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.541686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.542048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.542089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.542317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.542357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.542660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.542699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.543116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.543158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.543471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.543488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.543822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.543844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.544055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.544095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.544432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.544470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.544830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.544879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 
00:30:07.856 [2024-07-15 15:35:11.545186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.545225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.545625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.545664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.546027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.546068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.546432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.546472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.546803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.546850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.547208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.547225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.547468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.547485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.547653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.547670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.856 qpair failed and we were unable to recover it. 00:30:07.856 [2024-07-15 15:35:11.547956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.856 [2024-07-15 15:35:11.547973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.548316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.548355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 
00:30:07.857 [2024-07-15 15:35:11.548738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.548777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.549031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.549071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.549408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.549447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.549760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.549805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.550073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.550114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.550342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.550381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.550692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.550731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.550965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.551007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.551320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.551359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.551744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.551783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 
00:30:07.857 [2024-07-15 15:35:11.552089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.552128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.552438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.552478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.552793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.552843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.553212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.553251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.553613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.553652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.553893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.553934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.554228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.554268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.554537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.554577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.554897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.554938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.555325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.555364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 
00:30:07.857 [2024-07-15 15:35:11.555587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.555626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.555858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.555899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.556194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.556211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.556478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.556494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.556669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.556686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.556941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.556958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.557224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.557241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.557578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.557617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.558003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.558043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.558426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.558466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 
00:30:07.857 [2024-07-15 15:35:11.558805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.558875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.559177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.559194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.559441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.559458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.559711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.559728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.559966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.559984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.560229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.560246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.560555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.560595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.560950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.560991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.561377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.857 [2024-07-15 15:35:11.561416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.857 qpair failed and we were unable to recover it. 00:30:07.857 [2024-07-15 15:35:11.561797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.561845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 
00:30:07.858 [2024-07-15 15:35:11.562153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.562170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.562449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.562488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.562806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.562854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.563213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.563230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.563487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.563531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.563858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.563898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.564142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.564182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.564479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.564518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.564817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.564867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.564996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.565013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 
00:30:07.858 [2024-07-15 15:35:11.565261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.565300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.565618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.565657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.565896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.565937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.566325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.566364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.566678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.566717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.567057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.567091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.567402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.567441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.567767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.567806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.568189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.568229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.568519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.568537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 
00:30:07.858 [2024-07-15 15:35:11.568908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.568948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.569348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.569387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.569762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.569801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.570191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.570208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.570572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.570612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.570975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.571015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.571369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.571386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.571661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.571700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.572083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.572124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.572536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.572576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 
00:30:07.858 [2024-07-15 15:35:11.572802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.572851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.573243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.573260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.573442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.573459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.573735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.858 [2024-07-15 15:35:11.573774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.858 qpair failed and we were unable to recover it. 00:30:07.858 [2024-07-15 15:35:11.574165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.574206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.574564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.574581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.574924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.574965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.575286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.575325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.575647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.575687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.576004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.576045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 
00:30:07.859 [2024-07-15 15:35:11.576432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.576471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.576887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.576926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.577183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.577200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.577375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.577392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.577670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.577687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.577950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.577967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.578237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.578277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.578673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.578712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.578977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.579018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.579330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.579369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 
00:30:07.859 [2024-07-15 15:35:11.579610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.579649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.579957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.579997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.580188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.580205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.580468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.580507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.580821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.580871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.581195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.581234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.581547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.581586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.581828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.581877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.582240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.582284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.582545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.582584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 
00:30:07.859 [2024-07-15 15:35:11.583080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.583124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.583412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.583430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.583780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.583797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.583977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.584017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.584412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.584451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.584708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.584748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.585086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.585129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.585383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.585400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.585688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.585728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.586115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.586155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 
00:30:07.859 [2024-07-15 15:35:11.586528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.586567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.586938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.586956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.587311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.587351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.587644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.587683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.588067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.859 [2024-07-15 15:35:11.588107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.859 qpair failed and we were unable to recover it. 00:30:07.859 [2024-07-15 15:35:11.588366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.588406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.588703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.588742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.589098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.589144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.589387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.589404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.589714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.589731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 
00:30:07.860 [2024-07-15 15:35:11.589918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.589935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.590210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.590249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.590559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.590598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.590917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.590958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.591313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.591330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.591534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.591583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.591842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.591883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.592201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.592241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.592476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.592514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.592920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.592960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 
00:30:07.860 [2024-07-15 15:35:11.593272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.593289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.593608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.593648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.594013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.594053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.594296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.594313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.594569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.594586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.594764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.594781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.595044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.595061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.595319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.595336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.595678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.595718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.596104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.596144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 
00:30:07.860 [2024-07-15 15:35:11.596493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.596533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.596775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.596815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.597213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.597253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.597661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.597699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.598091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.598131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.598409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.598425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.598727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.598766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.599083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.599123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.599481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.599497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.599849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.599890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 
00:30:07.860 [2024-07-15 15:35:11.600122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.600161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.600537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.600554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.600918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.600964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.601280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.601319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.601542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.601581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.601845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.601885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.860 [2024-07-15 15:35:11.602050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.860 [2024-07-15 15:35:11.602089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.860 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.602330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.602369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.602781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.602820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.603124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.603164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 
00:30:07.861 [2024-07-15 15:35:11.603413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.603451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.603708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.603746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.604130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.604170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.604506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.604523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.604839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.604856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.605148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.605187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.605426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.605466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.605786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.605826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.606085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.606125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.606441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.606480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 
00:30:07.861 [2024-07-15 15:35:11.606785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.606824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.607194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.607211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.607543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.607561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.607829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.607881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.608257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.608296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.608602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.608619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.608876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.608924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.609310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.609350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.609659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.609676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.609946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.609963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 
00:30:07.861 [2024-07-15 15:35:11.610223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.610268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.610581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.610620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.610990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.611030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.611405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.611422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.611683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.611699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.611962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.611980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.612242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.612259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.612560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.612576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.612889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.612906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.613092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.613131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 
00:30:07.861 [2024-07-15 15:35:11.613426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.613466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.613776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.613814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.614187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.614227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.614680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.614759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.615038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.615083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.615407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.615456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.615709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.615753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.616143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.861 [2024-07-15 15:35:11.616187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.861 qpair failed and we were unable to recover it. 00:30:07.861 [2024-07-15 15:35:11.616497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.616514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.616786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.616822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 
00:30:07.862 [2024-07-15 15:35:11.617222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.617262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.617514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.617554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.617868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.617908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.618221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.618260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.618520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.618569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.618844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.618884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.619216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.619272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.619647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.619687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.620003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.620055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 00:30:07.862 [2024-07-15 15:35:11.620408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.620448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it. 
00:30:07.862 [2024-07-15 15:35:11.620776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.862 [2024-07-15 15:35:11.620815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.862 qpair failed and we were unable to recover it.
[... the same three-message failure pattern (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats continuously from 15:35:11.621 onward; only the timestamps differ between entries ...]
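Editor's note: the errno = 111 reported by posix_sock_create above is Linux's ECONNREFUSED, meaning the target at 10.0.0.2:4420 actively refused each TCP connection attempt (nothing was accepting on that port). A minimal check of that mapping, illustrative only and not SPDK source:

/* Illustrative only -- not SPDK code. Confirms what "errno = 111"
 * in the log above denotes on Linux. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* On Linux this prints: ECONNREFUSED = 111: Connection refused */
    printf("ECONNREFUSED = %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
    return 0;
}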
00:30:07.865 [2024-07-15 15:35:11.661755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.865 [2024-07-15 15:35:11.661795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:07.865 qpair failed and we were unable to recover it.
00:30:07.865 [2024-07-15 15:35:11.662149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.865 [2024-07-15 15:35:11.662228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.865 qpair failed and we were unable to recover it.
[... from 15:35:11.662 onward the failing qpair handle changes from 0x7ff174000b90 to 0x19dd210; the identical connect()/errno = 111 failure pattern then repeats continuously through 15:35:11.693, again differing only in timestamps ...]
00:30:07.867 [2024-07-15 15:35:11.693462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.693501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it.
00:30:07.867 [2024-07-15 15:35:11.693816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.693862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.694156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.694202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.694522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.694562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.694886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.694926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.695333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.695373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.695732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.695772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.696096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.696137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.696503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.696542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.696906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.696947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.697335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.697374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 
00:30:07.867 [2024-07-15 15:35:11.697689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.697707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.697815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.697835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.867 [2024-07-15 15:35:11.698026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.867 [2024-07-15 15:35:11.698043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.867 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.698293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.698333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.698670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.698710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.699053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.699070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.699355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.699394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.699699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.699739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.700035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.700075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.700462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.700501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 
00:30:07.868 [2024-07-15 15:35:11.700737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.700754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.701097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.701137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.701434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.701473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.701845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.701885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.702189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.702228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.702522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.702561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.702926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.702966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.703202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.703242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.703470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.703487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.703679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.703696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 
00:30:07.868 [2024-07-15 15:35:11.704034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.704075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.704458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.704498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.704720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.704737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.705051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.705097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.705413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.705453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.705768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.705807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.706200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.706239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.706603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.706643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.706958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.706998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.707367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.707406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 
00:30:07.868 [2024-07-15 15:35:11.707711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.707751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.708046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.708086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.708416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.708456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.708819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.708872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.709145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.709184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.709492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.709532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.709866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.709906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.710290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.710329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.710609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.710627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.710950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.710990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 
00:30:07.868 [2024-07-15 15:35:11.711381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.711433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.711767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.711784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.712140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.712181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.712495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.868 [2024-07-15 15:35:11.712534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.868 qpair failed and we were unable to recover it. 00:30:07.868 [2024-07-15 15:35:11.712762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.712779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.713024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.713041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.713389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.713429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.713739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.713778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.714092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.714133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.714434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.714473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 
00:30:07.869 [2024-07-15 15:35:11.714899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.714940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.715322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.715367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.715676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.715693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.715908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.715925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.716229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.716268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.716649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.716688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.717012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.717053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.717458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.717497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.717860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.717900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.718214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.718260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 
00:30:07.869 [2024-07-15 15:35:11.718647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.718686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.718924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.718965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.719326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.719365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.719704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.719744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.720080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.720122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.720485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.720524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.720823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.720852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.721157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.721198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.721507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.721546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.721790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.721830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 
00:30:07.869 [2024-07-15 15:35:11.722224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.722263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.722621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.722638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.722914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.722932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.723270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.723309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.723617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.723657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.723953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.723994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.724316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.724355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.724680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.724720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.725026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.725066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.725456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.725495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 
00:30:07.869 [2024-07-15 15:35:11.725889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.725906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.726246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.726285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.726649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.726689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.727007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.727048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.727442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.727481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.869 qpair failed and we were unable to recover it. 00:30:07.869 [2024-07-15 15:35:11.727883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.869 [2024-07-15 15:35:11.727901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.728173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.728219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.728583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.728622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.728786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.728802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.729124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.729142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 
00:30:07.870 [2024-07-15 15:35:11.729407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.729424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.729764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.729803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.730135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.730175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.730397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.730414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.730590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.730606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.730861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.730878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.731189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.731206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.731491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.731530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.731769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.731809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.732244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.732285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 
00:30:07.870 [2024-07-15 15:35:11.732625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.732665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.733029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.733070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.733406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.733445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.733764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.733804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.734181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.734220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.734395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.734434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.734729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.734770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.735084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.735124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.735454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.735491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.735829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.735879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 
00:30:07.870 [2024-07-15 15:35:11.736178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.736218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.736603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.736643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.736961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.737002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.737337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.737382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.737743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.737760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.738098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.738115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.738375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.738392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.738652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.738669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.738945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.738984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 00:30:07.870 [2024-07-15 15:35:11.739345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.870 [2024-07-15 15:35:11.739384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:07.870 qpair failed and we were unable to recover it. 
00:30:07.870 [2024-07-15 15:35:11.739719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.870 [2024-07-15 15:35:11.739758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:07.870 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed with errno = 111, i.e. ECONNREFUSED; the nvme_tcp qpair connection error for tqpair=0x19dd210 at 10.0.0.2 port 4420; and "qpair failed and we were unable to recover it.") repeats for roughly 200 further connection attempts between 15:35:11.739 and 15:35:11.811 ...]
00:30:08.148 [2024-07-15 15:35:11.810944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.148 [2024-07-15 15:35:11.810985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:08.148 qpair failed and we were unable to recover it.
00:30:08.148 [2024-07-15 15:35:11.811290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.811337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.811719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.811758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.812143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.812183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.812546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.812585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.812959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.812999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.813336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.813375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.813782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.813821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.814015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.814055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.814388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.814427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.814721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.814760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 
00:30:08.149 [2024-07-15 15:35:11.815148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.815188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.815453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.815493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.815825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.815874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.816211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.816251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.816596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.816636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.816945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.816985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.817248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.817287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.817673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.817713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.818075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.818116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.818368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.818407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 
00:30:08.149 [2024-07-15 15:35:11.818651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.818690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.819008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.819048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.819411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.819450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.819771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.819810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.820135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.820153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.820402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.820441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.820751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.820790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.821201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.821219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.821488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.821505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 00:30:08.149 [2024-07-15 15:35:11.821699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.821729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.149 qpair failed and we were unable to recover it. 
00:30:08.149 [2024-07-15 15:35:11.822042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.149 [2024-07-15 15:35:11.822083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.822464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.822502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.822808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.822857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.823158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.823197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.823511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.823550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.823925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.823942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.824277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.824316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.824610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.824649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.824961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.824977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.825296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.825335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 
00:30:08.150 [2024-07-15 15:35:11.825591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.825630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.826015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.826056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.826458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.826497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.826882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.826922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.827306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.827346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.827652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.827691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.828052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.828092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.828393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.828433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.828745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.828762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.829013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.829052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 
00:30:08.150 [2024-07-15 15:35:11.829301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.829340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.829636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.829675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.829939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.829979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.830315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.830354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.830681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.830720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.830971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.830988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.831264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.831303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.831669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.831708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.832045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.832063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 00:30:08.150 [2024-07-15 15:35:11.832325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.150 [2024-07-15 15:35:11.832342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.150 qpair failed and we were unable to recover it. 
00:30:08.151 [2024-07-15 15:35:11.832681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.832720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.833051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.833091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.833357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.833396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.833760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.833799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.834107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.834147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.834520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.834559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.834787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.834826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.835175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.835215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.835559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.835603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.835949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.835991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 
00:30:08.151 [2024-07-15 15:35:11.836402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.836442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.836829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.836879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.837195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.837212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.837393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.837410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.837692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.837731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.837981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.838032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.838371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.838410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.838717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.838757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.839124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.839141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.839471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.839510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 
00:30:08.151 [2024-07-15 15:35:11.839893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.839933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.840231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.840248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.840452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.840469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.840795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.840841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.841140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.841180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.841416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.841455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.841851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.841891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.842253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.842293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.842698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.842737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 00:30:08.151 [2024-07-15 15:35:11.843018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.151 [2024-07-15 15:35:11.843066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.151 qpair failed and we were unable to recover it. 
00:30:08.151 [2024-07-15 15:35:11.843412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.843451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.843862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.843903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.844261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.844278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.844519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.844536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.844806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.844856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.845220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.845265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.845599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.845639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.846020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.846038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.846253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.846270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.846469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.846508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 
00:30:08.152 [2024-07-15 15:35:11.846826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.846874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.847190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.847230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.847592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.847632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.847841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.847858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.848118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.848135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.848350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.848368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.848556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.848573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.848894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.848911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.849270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.849287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.849550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.849568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 
00:30:08.152 [2024-07-15 15:35:11.849820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.849873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.850238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.850278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.850572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.850612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.850929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.850969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.851273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.851312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.851617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.851656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.851969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.852009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.852230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.852269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.852657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.852696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.853027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.853068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 
00:30:08.152 [2024-07-15 15:35:11.853375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.152 [2024-07-15 15:35:11.853414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.152 qpair failed and we were unable to recover it. 00:30:08.152 [2024-07-15 15:35:11.853729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.853768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.854080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.854121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.854413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.854453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.854853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.854906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.855112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.855129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.855439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.855456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.855721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.855760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.856077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.856094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.856365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.856414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 
00:30:08.153 [2024-07-15 15:35:11.856726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.856765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.857071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.857088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.857381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.857420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.857718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.857756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.858102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.858143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.858455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.858495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.858880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.858897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.859144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.859161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.859490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.859530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 00:30:08.153 [2024-07-15 15:35:11.859780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.153 [2024-07-15 15:35:11.859797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.153 qpair failed and we were unable to recover it. 
00:30:08.160 [2024-07-15 15:35:11.929575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.929615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.929867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.929907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.930269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.930308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.930605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.930644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.931029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.931069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.931311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.931350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.931665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.931704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.932055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.932095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.932390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.932429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.932775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.932815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 
00:30:08.160 [2024-07-15 15:35:11.933118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.933169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.933357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.933374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.933629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.933646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.933891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.933908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.160 qpair failed and we were unable to recover it. 00:30:08.160 [2024-07-15 15:35:11.934088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.160 [2024-07-15 15:35:11.934105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.934356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.934395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.934711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.934750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.935069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.935108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.935433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.935472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.935792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.935841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 
00:30:08.161 [2024-07-15 15:35:11.936205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.936245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.936482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.936521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.936818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.936882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.937259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.937297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.937660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.937700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.937944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.937985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.938387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.938426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.938725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.938764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.939115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.939132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.939454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.939493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 
00:30:08.161 [2024-07-15 15:35:11.939870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.939910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.940216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.940233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.940529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.940568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.940864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.940904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.941200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.941217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.941424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.941441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.941773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.941813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.942188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.942227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.942528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.942567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.942874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.942915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 
00:30:08.161 [2024-07-15 15:35:11.943240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.943285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.943606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.943645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.944038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.944078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.944467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.944506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.944817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.944866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.161 [2024-07-15 15:35:11.945199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.161 [2024-07-15 15:35:11.945216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.161 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.945554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.945594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.945890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.945943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.946206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.946223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.946506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.946523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 
00:30:08.162 [2024-07-15 15:35:11.946790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.946830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.947216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.947255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.947603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.947642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.948001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.948042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.948461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.948501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.948851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.948899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.949237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.949253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.949512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.949530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.949742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.949781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.950127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.950168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 
00:30:08.162 [2024-07-15 15:35:11.950419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.950436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.950753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.950769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.951103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.951120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.951465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.951504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.951789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.951829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.952224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.952263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.952581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.952620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.953013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.953059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.953352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.953369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.953618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.953658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 
00:30:08.162 [2024-07-15 15:35:11.953989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.954029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.954400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.954416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.954734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.954773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.955184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.955224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.955544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.955584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.955943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.955983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.956305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.956344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.956599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.162 [2024-07-15 15:35:11.956638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.162 qpair failed and we were unable to recover it. 00:30:08.162 [2024-07-15 15:35:11.956974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.957014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.957391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.957430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 
00:30:08.163 [2024-07-15 15:35:11.957726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.957766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.958084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.958101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.958350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.958390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.958626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.958665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.959003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.959058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.959320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.959359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.959722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.959760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.960062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.960102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.960449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.960489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.960802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.960850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 
00:30:08.163 [2024-07-15 15:35:11.961016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.961055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.961280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.961320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.961611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.961650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.962012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.962053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.962365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.962410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.962773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.962812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.963125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.963165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.963482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.963521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.963878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.963918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.964142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.964159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 
00:30:08.163 [2024-07-15 15:35:11.964477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.964517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.964898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.964939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.965305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.965345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.965641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.965680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.966042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.966088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.966412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.966452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.966753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.966792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.967202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.967243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.967583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.967623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 00:30:08.163 [2024-07-15 15:35:11.967879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.163 [2024-07-15 15:35:11.967920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.163 qpair failed and we were unable to recover it. 
00:30:08.163 [2024-07-15 15:35:11.968233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.968273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.968639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.968679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.969006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.969023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.969319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.969359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.969743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.969783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.970200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.970253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.970453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.970470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.970807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.970855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.971112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.971151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.971519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.971555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 
00:30:08.164 [2024-07-15 15:35:11.971799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.971847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.972201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.972240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.972549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.972589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.972951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.972991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.973347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.973364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.973553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.973571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.973866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.973906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.974238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.974277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.974587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.974627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 00:30:08.164 [2024-07-15 15:35:11.974947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.974988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it. 
00:30:08.164 [2024-07-15 15:35:11.975366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.164 [2024-07-15 15:35:11.975404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.164 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back, with only the microsecond timestamp advancing, from 15:35:11.975 through 15:35:12.041 ...]
00:30:08.445 [2024-07-15 15:35:12.041384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.445 [2024-07-15 15:35:12.041422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.445 qpair failed and we were unable to recover it.
[... the identical failure then repeats for the new qpair, tqpair=0x7ff174000b90, still against addr=10.0.0.2, port=4420, through 15:35:12.046 ...]
00:30:08.446 [2024-07-15 15:35:12.046560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.046578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.046854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.046895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.047285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.047325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.047548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.047564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.047812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.047867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.048255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.048295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.048605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.048645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.049054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.049096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.049372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.049413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.049706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.049745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 
00:30:08.446 [2024-07-15 15:35:12.050117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.050158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.050492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.050531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.050940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.050981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.051311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.051351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.051646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.051685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.051986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.052027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.052355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.052395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.052647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.052686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.052926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.052966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.053331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.053371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 
00:30:08.446 [2024-07-15 15:35:12.053751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.053792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.054104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.054145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.054533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.054573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.054980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.055020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.055312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.055329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.055671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.055712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.056016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.056057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.056402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.056442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.056698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.056737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.057035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.057076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 
00:30:08.446 [2024-07-15 15:35:12.057354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.057371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.057617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.057633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.057895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.057946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.058255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.058294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.058599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.058638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.058881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.058922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.059187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.059226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.059500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.059538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.059871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.446 [2024-07-15 15:35:12.059918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.446 qpair failed and we were unable to recover it. 00:30:08.446 [2024-07-15 15:35:12.060218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.060258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 
00:30:08.447 [2024-07-15 15:35:12.060517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.060556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.060844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.060885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.061193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.061232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.061542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.061581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.061915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.061955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.062259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.062299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.062590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.062607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.062943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.063001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.063343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.063383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.063634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.063652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 
00:30:08.447 [2024-07-15 15:35:12.063900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.063940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.064276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.064315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.064606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.064623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.064880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.064897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.065103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.065120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.065318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.065336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.447 qpair failed and we were unable to recover it. 00:30:08.447 [2024-07-15 15:35:12.065658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.447 [2024-07-15 15:35:12.065698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 00:30:08.473 [2024-07-15 15:35:12.066007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.473 [2024-07-15 15:35:12.066048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 00:30:08.473 [2024-07-15 15:35:12.066278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.473 [2024-07-15 15:35:12.066317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 00:30:08.473 [2024-07-15 15:35:12.066620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.473 [2024-07-15 15:35:12.066660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 
00:30:08.473 [2024-07-15 15:35:12.066973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.473 [2024-07-15 15:35:12.067014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 00:30:08.473 [2024-07-15 15:35:12.067311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.473 [2024-07-15 15:35:12.067350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 00:30:08.473 [2024-07-15 15:35:12.067583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.473 [2024-07-15 15:35:12.067600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 00:30:08.473 [2024-07-15 15:35:12.067894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.473 [2024-07-15 15:35:12.067935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.473 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.068178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.068216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.068559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.068600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.068898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.068939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.069330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.069371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.069678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.069695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.069987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.070027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 
00:30:08.474 [2024-07-15 15:35:12.070258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.070299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.070605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.070644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.071008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.071048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.071359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.071398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.071626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.071666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.072029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.072069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.072366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.072407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.072766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.072783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.073103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.073122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.073387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.073426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 
00:30:08.474 [2024-07-15 15:35:12.073737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.073777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.074039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.074080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.074366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.074382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.074631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.074648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.074986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.075027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.075345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.075385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.075681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.075715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.076037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.076078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.076318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.076357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.076536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.076553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 
00:30:08.474 [2024-07-15 15:35:12.076910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.076927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.077122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.077139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.077421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.077456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.077704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.077743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.078152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.078192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.078505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.078544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.078845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.078863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.079123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.079140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.079387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.079404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.079648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.079665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 
00:30:08.474 [2024-07-15 15:35:12.079873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.079891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.080313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.080338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.080614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.080655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.080969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.474 [2024-07-15 15:35:12.081011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.474 qpair failed and we were unable to recover it. 00:30:08.474 [2024-07-15 15:35:12.081321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.081362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.081735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.081776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.082049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.082090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.082457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.082498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.082842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.082859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.083105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.083121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 
00:30:08.475 [2024-07-15 15:35:12.083317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.083351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.083714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.083754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.084178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.084219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.084442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.084459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.084710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.084749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.085039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.085081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.085465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.085504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.085829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.085876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.086106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.086152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.086519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.086557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 
00:30:08.475 [2024-07-15 15:35:12.086863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.086880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.087070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.087086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.087277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.087294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.087614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.087654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.087963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.088003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.088365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.088405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.088715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.088732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.088915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.088933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.089192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.089209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.089478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.089496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 
00:30:08.475 [2024-07-15 15:35:12.089795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.089843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.090192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.090237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.090498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.090544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.090854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.090912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.091162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.091201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.091503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.091520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.091787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.091804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.092050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.092067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.093100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.475 [2024-07-15 15:35:12.093133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.475 qpair failed and we were unable to recover it. 00:30:08.475 [2024-07-15 15:35:12.093391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.093410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 
00:30:08.476 [2024-07-15 15:35:12.093734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.093774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.094052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.094093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.094483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.094523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.094862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.094903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.095294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.095334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.095596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.095643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.095899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.095917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.096233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.096273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.096529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.096546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.096794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.096812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 
00:30:08.476 [2024-07-15 15:35:12.097115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.097156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.097401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.097440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.097777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.097817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.098062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.098102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.098461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.098505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.098762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.098801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.099200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.099240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.099533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.099572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.099876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.099896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.100079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.100096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 
00:30:08.476 [2024-07-15 15:35:12.100277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.100294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.100498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.100515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.100709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.100747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.100979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.101021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.101359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.101404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.101648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.101666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.101934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.101952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.102238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.102255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.102546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.102563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.103713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.103746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 
00:30:08.476 [2024-07-15 15:35:12.104013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.104033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.104310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.104360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.104759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.104799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.105123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.105164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.105472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.105512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.105702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.105742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.476 [2024-07-15 15:35:12.106032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.476 [2024-07-15 15:35:12.106049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.476 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.106250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.106267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.106462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.106480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.106755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.106773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 
00:30:08.477 [2024-07-15 15:35:12.107028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.107045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.107304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.107321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.107579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.107596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.107847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.107864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.108065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.108081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.108258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.108277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.108527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.108544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.108743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.108760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.109024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.109041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.109230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.109247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 
00:30:08.477 [2024-07-15 15:35:12.109507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.109524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.109715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.109732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.109930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.109947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.110141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.110158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.110354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.110371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.110558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.110575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.110824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.110846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.111104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.111121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.111363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.111380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.111627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.111643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 
00:30:08.477 [2024-07-15 15:35:12.111902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.111919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.112170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.112187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.112510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.112527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.112840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.112857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.113118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.477 [2024-07-15 15:35:12.113135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.477 qpair failed and we were unable to recover it. 00:30:08.477 [2024-07-15 15:35:12.113326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.113343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.113600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.113618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.113813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.113830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.114120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.114137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.114474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.114491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 
00:30:08.478 [2024-07-15 15:35:12.114777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.114794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.115057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.115075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.115410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.115426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.115751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.115768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.116047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.116064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.116323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.116340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.116636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.116653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.116899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.116916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.117107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.117124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.117318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.117335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 
00:30:08.478 [2024-07-15 15:35:12.117581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.117598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.117854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.117872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.118262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.118279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.118523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.118540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.118818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.118854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.119192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.119213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.119475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.119492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.119736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.119753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.120044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.120062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.120324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.120341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 
00:30:08.478 [2024-07-15 15:35:12.120533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.120551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.478 [2024-07-15 15:35:12.120860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.478 [2024-07-15 15:35:12.120879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.478 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.121122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.121139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.121334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.121351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.121661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.121679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.121963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.121981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.122170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.122188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.122430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.122447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.122717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.122734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.122993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.123011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 
00:30:08.479 [2024-07-15 15:35:12.123323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.123340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.123592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.123608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.123871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.123889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.124083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.124100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.124310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.124327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.124652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.124669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.124849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.124867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.125129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.125146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.125326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.125344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.125703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.125720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 
00:30:08.479 [2024-07-15 15:35:12.125926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.125943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.126148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.126165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.126439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.126456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.126713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.126730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.126984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.127002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.127311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.127329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.127578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.127598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.479 [2024-07-15 15:35:12.127940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.479 [2024-07-15 15:35:12.127958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.479 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.128227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.128245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.128556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.128573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 
00:30:08.480 [2024-07-15 15:35:12.128788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.128804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.129176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.129193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.129460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.129477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.129798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.129814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.130068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.130085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.130272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.130293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.130604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.130622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.130798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.130814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.131025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.131062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.131318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.131337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 
00:30:08.480 [2024-07-15 15:35:12.131548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.131565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.131828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.131854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.132166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.132184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.132294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.132311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.132562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.132578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.132822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.132845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.133057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.133073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.133188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.133205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.133514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.133532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.133847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.133865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 
00:30:08.480 [2024-07-15 15:35:12.134066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.134083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.134351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.134368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.134626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.134643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.134839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.134856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.480 [2024-07-15 15:35:12.135059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.480 [2024-07-15 15:35:12.135076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.480 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.135342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.135359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.135720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.135737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.135984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.136001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.136266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.136283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.136540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.136557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 
00:30:08.481 [2024-07-15 15:35:12.136752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.136768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.136961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.136978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.137257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.137275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.137471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.137488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.137803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.137820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.138103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.138120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.138452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.138469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.138791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.138809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.138990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.139008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 00:30:08.481 [2024-07-15 15:35:12.139271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.481 [2024-07-15 15:35:12.139288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.481 qpair failed and we were unable to recover it. 
00:30:08.481 [2024-07-15 15:35:12.139476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.481 [2024-07-15 15:35:12.139493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420
00:30:08.481 qpair failed and we were unable to recover it.
00:30:08.481-00:30:08.489 [... the same three-line failure repeats continuously from 15:35:12.139609 through 15:35:12.196789: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x7ff164000b90 (addr=10.0.0.2, port=4420), and each time the qpair fails and cannot be recovered ...]
00:30:08.489 [2024-07-15 15:35:12.197114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.197131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.197390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.197408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.197531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.197549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.197808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.197826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.198076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.198093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.198216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.198233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.198519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.198536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.198804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.198821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.199149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.199166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.199478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.199496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 
00:30:08.489 [2024-07-15 15:35:12.199805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.199822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.199943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.199961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.200297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.200317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.200629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.200647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.200926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.200943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.201149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.201166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.201430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.201448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.201701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.201718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.202039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.202056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.202378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.202394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 
00:30:08.489 [2024-07-15 15:35:12.202675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.202692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.202970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.202987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.203325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.203343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.203608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.203625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.203803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.203820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.204089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.204106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.204380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.204398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.204653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.204670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.204877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.204894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.205187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.205204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 
00:30:08.489 [2024-07-15 15:35:12.205385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.205402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.489 [2024-07-15 15:35:12.205579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.489 [2024-07-15 15:35:12.205596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.489 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.205907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.205924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.206177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.206194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.206441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.206458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.206705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.206725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.207055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.207072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.207404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.207421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.207611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.207628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.207949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.207967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 
00:30:08.490 [2024-07-15 15:35:12.208299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.208317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.208560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.208577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.208848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.208866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.209190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.209207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.209469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.209487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.209662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.209679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.210008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.210025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.210281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.210298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.210631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.210648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.210929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.210947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 
00:30:08.490 [2024-07-15 15:35:12.211209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.211227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.211521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.211538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.211851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.211869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.212160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.212178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.212425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.212442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.212619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.212636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.212823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.212852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.213178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.213195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.213507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.213525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.213867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.213884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 
00:30:08.490 [2024-07-15 15:35:12.214072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.214090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.214336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.214352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.214524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.214542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.214735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.214752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.215008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.215026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.215337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.215354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.215542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.215559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.215812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.215829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.216148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.216166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.216435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.216452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 
00:30:08.490 [2024-07-15 15:35:12.216772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.216789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.217100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.490 [2024-07-15 15:35:12.217117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.490 qpair failed and we were unable to recover it. 00:30:08.490 [2024-07-15 15:35:12.217380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.217397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.217578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.217595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.217851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.217869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.218126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.218147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.218323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.218340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.218468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.218485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.218751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.218768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.219028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.219046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 
00:30:08.491 [2024-07-15 15:35:12.219160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.219177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.219373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.219390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.219585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.219602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.219839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.219857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.220031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.220048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.220195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.220212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.220467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.220484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.220751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.220768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.220963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.220980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.221245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.221262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 
00:30:08.491 [2024-07-15 15:35:12.221575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.221593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.221841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.221858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.222112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.222129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.222463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.222481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.222815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.222836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.223027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.223044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.223301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.223317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.223504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.223521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.223803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.223821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.224017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.224035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 
00:30:08.491 [2024-07-15 15:35:12.224372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.224389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.224586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.224603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.224710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.224728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.224930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.224947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.225126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.225143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.225452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.225470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.225773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.225790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.226077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.226095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.226352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.226368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.226628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.226645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 
00:30:08.491 [2024-07-15 15:35:12.226775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.226793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.227044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.227061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.491 [2024-07-15 15:35:12.227319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.491 [2024-07-15 15:35:12.227336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.491 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.227601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.227618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.227883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.227900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.228157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.228176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.228505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.228522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.228725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.228742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.228987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.229005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.229253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.229271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 
00:30:08.492 [2024-07-15 15:35:12.229515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.229532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.229783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.229800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.230126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.230144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.230486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.230504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.230841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.230859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.231049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.231067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.231379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.231397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.231580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.231598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.231786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.231803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 00:30:08.492 [2024-07-15 15:35:12.232094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.232112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it. 
00:30:08.492 [2024-07-15 15:35:12.232308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.492 [2024-07-15 15:35:12.232325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.492 qpair failed and we were unable to recover it.
00:30:08.497 [repeated log output elided: the identical posix.c:1038:posix_sock_create "connect() failed, errno = 111" and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420" pair, each ending in "qpair failed and we were unable to recover it.", recurs continuously from 15:35:12.232 through 15:35:12.290]
00:30:08.497 [2024-07-15 15:35:12.290438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-15 15:35:12.290455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-15 15:35:12.290732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-15 15:35:12.290749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-15 15:35:12.291103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-15 15:35:12.291123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-15 15:35:12.291365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-15 15:35:12.291382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.497 qpair failed and we were unable to recover it. 00:30:08.497 [2024-07-15 15:35:12.291621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.497 [2024-07-15 15:35:12.291638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.291901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.291919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.292129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.292146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.292343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.292359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.292634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.292651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.292909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.292926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-15 15:35:12.293116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.293133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.293324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.293341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.293584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.293600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.293859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.293876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.294150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.294167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.294365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.294382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.294654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.294671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.295010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.295028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.295384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.295400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.295594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.295612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-15 15:35:12.295857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.295874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.296210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.296227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.296510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.296527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.296701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.296718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.296964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.296981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.297255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.297272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.297543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.297560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.297730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.297747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.297992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.298009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.298301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.298319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-15 15:35:12.298569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.298586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.298842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.298859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.299169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.299186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.299438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.299455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.299698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.299715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.300072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.300090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.300348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.300365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.300677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.300694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.300885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.300902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.301142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.301159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 
00:30:08.498 [2024-07-15 15:35:12.301370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.301387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.301746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.301763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.302091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.302111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.302302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.302319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.498 [2024-07-15 15:35:12.302650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.498 [2024-07-15 15:35:12.302668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.498 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.302923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.302940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.303188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.303204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.303448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.303466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.303798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.303816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.304102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.304119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-15 15:35:12.304366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.304384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.304626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.304644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.304928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.304947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.305238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.305255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.305511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.305529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.305775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.305792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.306054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.306071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.306269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.306286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.306553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.306569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.306826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.306848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-15 15:35:12.307093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.307110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.307303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.307322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.307583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.307601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.307775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.307792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.308105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.308123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.308384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.308402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.308714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.308731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.309043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.309060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.309370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.309387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.309594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.309612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-15 15:35:12.309856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.309874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.310199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.310216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.310498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.310515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.310756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.310773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.311053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.311069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.311381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.311398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.311584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.311601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.311857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.311874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.312089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.312106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.312349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.312366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 
00:30:08.499 [2024-07-15 15:35:12.312549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.312566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.312887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.312905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.313094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.313114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.313316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.313333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.313611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.313628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.499 qpair failed and we were unable to recover it. 00:30:08.499 [2024-07-15 15:35:12.313943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.499 [2024-07-15 15:35:12.313963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.314248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.314267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.314481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.314498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.314692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.314709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.314971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.314990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-15 15:35:12.315180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.315197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.315532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.315550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.315747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.315764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.316099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.316116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.316373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.316390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.316634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.316651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.316933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.316949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.317222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.317239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.317417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.317434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.317761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.317778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-15 15:35:12.318002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.318019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.318215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.318232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.318492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.318509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.318830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.318851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.319092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.319109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.319419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.319436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.319630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.319647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.319901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.319918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.320229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.320246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.320438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.320455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-15 15:35:12.320702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.320719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.320965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.320982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.321224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.321242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.321501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.321517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.321769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.321787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.321972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.321990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.322235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.322252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.322495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.322513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.322711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.322729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.322993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.323011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 
00:30:08.500 [2024-07-15 15:35:12.323283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.323300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.323554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.323572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.323758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.323779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.324031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.324048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.324357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.324374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.500 qpair failed and we were unable to recover it. 00:30:08.500 [2024-07-15 15:35:12.324612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.500 [2024-07-15 15:35:12.324629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-15 15:35:12.324911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-15 15:35:12.324928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-15 15:35:12.325265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-15 15:35:12.325282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-15 15:35:12.325618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-15 15:35:12.325635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 00:30:08.501 [2024-07-15 15:35:12.325841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.501 [2024-07-15 15:35:12.325858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:08.501 qpair failed and we were unable to recover it. 
00:30:08.501 [2024-07-15 15:35:12.326104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.501 [2024-07-15 15:35:12.326121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420
00:30:08.501 qpair failed and we were unable to recover it.
[editorial elision: roughly 200 near-identical retry attempts removed. Each repeats the same pair of errors, posix.c:1038:posix_sock_create reporting connect() failed with errno = 111 followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error, alternating between tqpair=0x7ff164000b90 and tqpair=0x7ff16c000b90, always with addr=10.0.0.2, port=4420, over timestamps 15:35:12.326 through 15:35:12.379. Every attempt ends with "qpair failed and we were unable to recover it."]
00:30:08.779 [2024-07-15 15:35:12.379321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.779 [2024-07-15 15:35:12.379333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:08.779 qpair failed and we were unable to recover it.
00:30:08.779 [2024-07-15 15:35:12.379635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.379648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.379971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.379985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.380280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.380293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.380543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.380556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.380791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.380804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.380969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.380982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.381226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.381239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.381489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.381501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.381802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.381815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.382056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.382069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 
00:30:08.779 [2024-07-15 15:35:12.382396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.382409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.382658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.382671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.382924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.382937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.383188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.779 [2024-07-15 15:35:12.383200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.779 qpair failed and we were unable to recover it. 00:30:08.779 [2024-07-15 15:35:12.383380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.383393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.383641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.383653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.383904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.383917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.384217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.384230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.384405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.384418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.384718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.384731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 
00:30:08.780 [2024-07-15 15:35:12.385042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.385055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.385295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.385307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.385486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.385498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.385665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.385678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.385911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.385923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.386225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.386237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.386424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.386437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.386765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.386778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.387025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.387038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.387281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.387295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 
00:30:08.780 [2024-07-15 15:35:12.387580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.387593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.387921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.387934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.388114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.388128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.388377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.388391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.388626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.388639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.388898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.388924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.389174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.389186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.389445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.389458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.389701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.389714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.390039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.390052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 
00:30:08.780 [2024-07-15 15:35:12.390220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.390233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.390487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.390500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.390736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.390749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.391021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.391034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.391235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.391248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.391551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.391564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.391807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.391820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.392080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.392093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.392397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.392410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.392576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.392591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 
00:30:08.780 [2024-07-15 15:35:12.392850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.392864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.393101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.393113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.393348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.393361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.393595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.393608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.780 [2024-07-15 15:35:12.393929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.780 [2024-07-15 15:35:12.393943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.780 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.394197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.394210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.394510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.394523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.394757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.394770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.394999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.395012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.395176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.395189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 
00:30:08.781 [2024-07-15 15:35:12.395442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.395454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.395755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.395768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.395948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.395961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.396198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.396211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.396498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.396511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.396604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.396616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.396867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.396880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.397141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.397154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.397462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.397474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.397635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.397648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 
00:30:08.781 [2024-07-15 15:35:12.397949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.397962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.398232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.398245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.398518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.398531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.398839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.398853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.399125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.399137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.399246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.399259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.399549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.399561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.399721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.399734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.400058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.400071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.400374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.400387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 
00:30:08.781 [2024-07-15 15:35:12.400700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.400712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.401015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.401028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.401354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.401366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.401559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.401572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.401840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.401853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.402087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.402100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.402400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.402413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.402589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.402602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.402787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.402800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.403039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.403054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 
00:30:08.781 [2024-07-15 15:35:12.403381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.403393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.403547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.403560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.403805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.403817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.404148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.404161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.404332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.404345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.781 [2024-07-15 15:35:12.404672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.781 [2024-07-15 15:35:12.404684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.781 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.404966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.404979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.405280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.405293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.405527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.405539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.405854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.405867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 
00:30:08.782 [2024-07-15 15:35:12.406048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.406061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.406322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.406335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.406454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.406467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.406721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.406734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.406974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.406987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.407226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.407238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.407564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.407577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.407865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.407878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.408121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.408134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.408381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.408394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 
00:30:08.782 [2024-07-15 15:35:12.408592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.408605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.408927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.408940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.409121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.409133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.409389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.409402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.409680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.409692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.409873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.409887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.410121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.410134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.410368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.410381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.410708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.410720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.410980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.410993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 
00:30:08.782 [2024-07-15 15:35:12.411110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.411122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.411414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.411427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.411600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.411613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.411862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.411876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.412054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.412066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.412384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.412398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.412593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.412606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.412866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.412879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.413187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.413200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.413443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.413458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 
00:30:08.782 [2024-07-15 15:35:12.413723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.413735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.413983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.413996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.414233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.414246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.414556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.414569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.414871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.414884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.415188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.415200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.782 [2024-07-15 15:35:12.415461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.782 [2024-07-15 15:35:12.415474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.782 qpair failed and we were unable to recover it. 00:30:08.783 [2024-07-15 15:35:12.415774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.783 [2024-07-15 15:35:12.415787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.783 qpair failed and we were unable to recover it. 00:30:08.783 [2024-07-15 15:35:12.416138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.783 [2024-07-15 15:35:12.416151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.783 qpair failed and we were unable to recover it. 00:30:08.783 [2024-07-15 15:35:12.416387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.783 [2024-07-15 15:35:12.416400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.783 qpair failed and we were unable to recover it. 
00:30:08.788 [2024-07-15 15:35:12.469072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.469085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.469387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.469400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.469589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.469601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.469921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.469934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.470192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.470205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.470473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.470485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.470672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.470684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.470958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.470970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.471230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.471243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.471558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.471571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 
00:30:08.788 [2024-07-15 15:35:12.471819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.471836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.472096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.472109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.472381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.472394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.472719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.472732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.472911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.472924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.473249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.473262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.473563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.473576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.473739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.473752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.473870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.473882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.474131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.474144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 
00:30:08.788 [2024-07-15 15:35:12.474449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.474461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.474563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.474576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.474831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.474848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.475176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.475188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.475498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.475511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.475749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.475761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.476057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.476069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.476371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.476383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.476735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.476747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 00:30:08.788 [2024-07-15 15:35:12.476923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.788 [2024-07-15 15:35:12.476938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.788 qpair failed and we were unable to recover it. 
00:30:08.789 [2024-07-15 15:35:12.477174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.477187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.477457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.477470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.477703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.477716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.477978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.477991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.478276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.478289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.478525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.478537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.478793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.478805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.478900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.478914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.479211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.479224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.479468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.479480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 
00:30:08.789 [2024-07-15 15:35:12.479782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.479795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.480143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.480156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.480401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.480413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.480649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.480662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.480962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.480975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.481308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.481320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.481571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.481583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.481882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.481895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.482147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.482159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.482323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.482336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 
00:30:08.789 [2024-07-15 15:35:12.482584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.482596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.482921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.482934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.483180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.483192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.483452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.483465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.483729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.483742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.484065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.484078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.484249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.484261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.484513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.484525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.484843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.484856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.485132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.485145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 
00:30:08.789 [2024-07-15 15:35:12.485392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.485405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.485588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.485600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.485837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.485850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.486106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.486119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.486371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.486384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.486652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.486666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.486847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.486860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.487162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.487175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.487457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.487471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 00:30:08.789 [2024-07-15 15:35:12.487721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.789 [2024-07-15 15:35:12.487736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.789 qpair failed and we were unable to recover it. 
00:30:08.789 [2024-07-15 15:35:12.488085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.488099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.488380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.488394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.488553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.488566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.488761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.488774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.489040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.489053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.489295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.489308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.489623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.489635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.489886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.489899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.490134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.490147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.490400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.490413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 
00:30:08.790 [2024-07-15 15:35:12.490734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.490747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.491075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.491088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.491338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.491351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.491590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.491603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.491853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.491866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.492060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.492073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.492242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.492255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.492488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.492501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.492735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.492748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.492985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.492998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 
00:30:08.790 [2024-07-15 15:35:12.493193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.493206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.493534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.493547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.493739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.493752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.493951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.493964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.494207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.494219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.494455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.494467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.494654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.494667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.494986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.494999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.495254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.495267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.495525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.495538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 
00:30:08.790 [2024-07-15 15:35:12.495715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.495728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.495988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.496001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.496240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.496253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.496626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.496638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.496948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.496961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.497291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.497304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.497555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.497568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.497814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.497827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.498096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.498108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.498366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.498381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 
00:30:08.790 [2024-07-15 15:35:12.498621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.790 [2024-07-15 15:35:12.498634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.790 qpair failed and we were unable to recover it. 00:30:08.790 [2024-07-15 15:35:12.498974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.498987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.499232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.499245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.499441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.499454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.499708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.499721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.500022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.500035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.500336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.500348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.500588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.500601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.500904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.500917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.501076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.501089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 
00:30:08.791 [2024-07-15 15:35:12.501416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.501429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.501743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.501756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.501942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.501955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.502256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.502269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.502570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.502583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.502784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.502797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.503032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.503044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.503310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.503323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.503570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.503584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.503826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.503844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 
00:30:08.791 [2024-07-15 15:35:12.504088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.504101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.504378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.504391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.504624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.504637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.504953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.504966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.505209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.505222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.505467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.505481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.505778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.505816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.506064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.506083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.506282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.506300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.506511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.506528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 
00:30:08.791 [2024-07-15 15:35:12.506769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.506783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.507032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.507045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.507210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.507222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.507514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.507527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.507761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.507774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.508097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.508110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.508409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.508422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.508680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.508693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.508938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.508951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.509284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.509297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 
00:30:08.791 [2024-07-15 15:35:12.509561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.509574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.509898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.791 [2024-07-15 15:35:12.509911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.791 qpair failed and we were unable to recover it. 00:30:08.791 [2024-07-15 15:35:12.510162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.510175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.510501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.510513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.510813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.510825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.511081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.511094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.511341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.511353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.511623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.511636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.511869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.511882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.512135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.512148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-15 15:35:12.512378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.512391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.512658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.512671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.512916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.512929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.513232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.513245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.513355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.513368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.513626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.513639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.513889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.513902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.514105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.514118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.514353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.514365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.514612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.514625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-15 15:35:12.514976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.514989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.515224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.515237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.515561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.515573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.515875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.515888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.516141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.516153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.516462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.516475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.516660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.516675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.516928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.516941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.517192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.517205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.517514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.517528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 
00:30:08.792 [2024-07-15 15:35:12.517708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.517721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.517976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.517989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.518173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.518186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.518430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.518442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.518680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.518693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.518861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.518874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.519148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.519160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.792 [2024-07-15 15:35:12.519487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.792 [2024-07-15 15:35:12.519500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.792 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.519742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.519755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.519935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.519948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-15 15:35:12.520277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.520290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.520484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.520496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.520838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.520851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.521171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.521184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.521418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.521431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.521555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.521567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.521869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.521881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.522065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.522078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.522329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.522341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.522589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.522601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-15 15:35:12.522843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.522855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.523180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.523193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.523286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.523299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.523546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.523558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.523802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.523815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.524055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.524069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.524229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.524241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.524511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.524524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.524769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.524782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.525086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.525099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-15 15:35:12.525349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.525362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.525705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.525717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.525901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.525914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.526222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.526234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.526469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.526482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.526733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.526745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.527023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.527038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.527273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.527286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.527531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.527544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.527790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.527803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 
00:30:08.793 [2024-07-15 15:35:12.528033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.528046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.528282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.528295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.528401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.528414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.528759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.528772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.528947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.528961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.529263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.529276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.529522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.529534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.529716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.529729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.793 [2024-07-15 15:35:12.529995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.793 [2024-07-15 15:35:12.530007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.793 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.530260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.530273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 
00:30:08.794 [2024-07-15 15:35:12.530511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.530524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.530759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.530771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.531091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.531103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.531357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.531370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.531638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.531651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.531929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.531942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.532178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.532190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.532426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.532439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.532772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.532785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.532961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.532974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 
00:30:08.794 [2024-07-15 15:35:12.533246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.533259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.533559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.533572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.533888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.533901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.534229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.534242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.534490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.534503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.534753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.534766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.534867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.534879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.535061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.535074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.535340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.535353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.535528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.535541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 
00:30:08.794 [2024-07-15 15:35:12.535803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.535816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.536067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.536080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.536349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.536361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.536595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.536608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.536912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.536925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.537253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.537266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.537591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.537606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.537961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.537973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.538320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.538333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.538617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.538630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 
00:30:08.794 [2024-07-15 15:35:12.538864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.538877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.539107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.539120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.539392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.539405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.539723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.539735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.539920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.539933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.540250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.540263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.540499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.540511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.540762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.540775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.794 qpair failed and we were unable to recover it. 00:30:08.794 [2024-07-15 15:35:12.541034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.794 [2024-07-15 15:35:12.541047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.541360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.541373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 
00:30:08.795 [2024-07-15 15:35:12.541554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.541567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.541814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.541827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.542025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.542037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.542283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.542296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.542529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.542541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.542844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.542857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.543092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.543105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.543428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.543441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.543743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.543755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.544082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.544094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 
00:30:08.795 [2024-07-15 15:35:12.544260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.544272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.544543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.544556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.544787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.544800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.545072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.545085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.545386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.545398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.545577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.545590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.545918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.545931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.546115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.546128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.546475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.546487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.546608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.546621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 
00:30:08.795 [2024-07-15 15:35:12.546892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.546905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.547134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.547146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.547397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.547409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.547666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.547679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.547980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.547993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.548227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.548240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.548564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.548578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.548835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.548848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.549125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.549138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 00:30:08.795 [2024-07-15 15:35:12.549464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.549476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it. 
00:30:08.795 [2024-07-15 15:35:12.549807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.795 [2024-07-15 15:35:12.549819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.795 qpair failed and we were unable to recover it.
[... the same three-message pattern repeats roughly 200 more times, from 15:35:12.550068 through 15:35:12.606167, with only the microsecond timestamps advancing; every attempt targets tqpair=0x7ff16c000b90 at 10.0.0.2 port 4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:30:08.801 [2024-07-15 15:35:12.606494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.606507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.606764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.606776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.607025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.607039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.607270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.607283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.607459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.607472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.607723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.607736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.607967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.607980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.608087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.608100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.608346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.608359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.608595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.608607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 
00:30:08.801 [2024-07-15 15:35:12.608789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.608802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.608981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.608994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.609250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.609262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.609456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.609469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.609726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.609739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.609991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.610004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.610298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.610311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.610559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.610572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.610768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.610780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.611057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.611070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 
00:30:08.801 [2024-07-15 15:35:12.611352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.611365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.611615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.611629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.611821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.611872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.612261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.612302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.612616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.801 [2024-07-15 15:35:12.612656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.801 qpair failed and we were unable to recover it. 00:30:08.801 [2024-07-15 15:35:12.613019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.613060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.613447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.613497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.613788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.613801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.613968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.613981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.614286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.614326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 
00:30:08.802 [2024-07-15 15:35:12.614643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.614684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.614943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.614985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.615347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.615387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.615776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.615816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.616054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.616094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.616460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.616499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.616887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.616927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.617245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.617285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.617583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.617622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.618013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.618053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 
00:30:08.802 [2024-07-15 15:35:12.618422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.618461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.618733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.618745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.619034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.619075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.619373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.619413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.619724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.619763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.619938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.619951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.620278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.620317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.620696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.620736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.620998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.621012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 00:30:08.802 [2024-07-15 15:35:12.621256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.802 [2024-07-15 15:35:12.621291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.802 qpair failed and we were unable to recover it. 
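For reference, errno = 111 on Linux is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 is answered with a RST because nothing is accepting connections on that port, so each connect() fails immediately and the driver gives up on the qpair. The following is a minimal standalone C sketch, not SPDK code, that reproduces the same failure mode; the address and port are taken from the log, and seeing errno 111 (rather than ETIMEDOUT or EHOSTUNREACH) assumes the target host is up but has no listener on that port:

/* Minimal standalone sketch (not SPDK code): reproduce the
 * "connect() failed, errno = 111" above by connecting to a port
 * with no listener. Address and port come from the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With the host up but no listener, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}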
[... three further failed attempts at 15:35:12.621588, 15:35:12.622039 and 15:35:12.622387 ...]
00:30:08.802 [2024-07-15 15:35:12.622649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19eb1f0 is same with the state(5) to be set
00:30:08.802 [2024-07-15 15:35:12.623032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.802 [2024-07-15 15:35:12.623113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:08.802 qpair failed and we were unable to recover it.
[... five more failed attempts against tqpair=0x19dd210, timestamps 15:35:12.623463 through 15:35:12.624882 ...]
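The one entry that breaks the pattern comes from nvme_tcp_qpair_set_recv_state, which fires when the code is asked to move a qpair into the receive state it is already in. Below is a hedged sketch of that kind of guard; the enum names and numeric values are assumptions for illustration, not copied from SPDK, so the mapping of "state(5)" onto a terminal error state is only presumed:

#include <stdio.h>

/* Illustrative-only state names/values; SPDK's real enum lives in
 * nvme_tcp.c and may differ. "state(5)" is presumed terminal here. */
enum pdu_recv_state {
    RECV_STATE_AWAIT_PDU_READY = 0,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,                   /* = 5 under these assumptions */
};

struct tcp_qpair {
    enum pdu_recv_state recv_state;
};

static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* The guard behind the log line: a redundant transition is
         * reported rather than silently ignored. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the message */
    return 0;
}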
[... the same connect() failed / sock connection error / qpair failed triplet repeats about one hundred more times against tqpair=0x19dd210, timestamps advancing from 15:35:12.625154 through 15:35:12.660246 ...]
00:30:08.805 [2024-07-15 15:35:12.660512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.660551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.660883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.660925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.661145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.661175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.661290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.661303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.661639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.661680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.662079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.662121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.662381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.662420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.662683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.662723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.663110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.663151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 00:30:08.805 [2024-07-15 15:35:12.663536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.805 [2024-07-15 15:35:12.663575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:08.805 qpair failed and we were unable to recover it. 
00:30:08.805 [... same connect() failed (errno = 111) / qpair error repeated for tqpair=0x7ff16c000b90, timestamps advancing through 2024-07-15 15:35:12.721089 ...]
00:30:09.085 [2024-07-15 15:35:12.721048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.085 [2024-07-15 15:35:12.721089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.085 qpair failed and we were unable to recover it.
00:30:09.085 [2024-07-15 15:35:12.721474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.721513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.721877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.721917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.722142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.722181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.722492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.722531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.722895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.722936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.723325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.723365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.723748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.723789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.724167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.724207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.724594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.724633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.724953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.724994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 
00:30:09.085 [2024-07-15 15:35:12.725297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.725336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.085 [2024-07-15 15:35:12.725634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.085 [2024-07-15 15:35:12.725674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.085 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.725942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.725955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.726186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.726198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.726500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.726521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.726864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.726904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.727078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.727118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.727501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.727540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.727851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.727892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.728251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.728262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 
00:30:09.086 [2024-07-15 15:35:12.728441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.728481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.728806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.728854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.729212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.729226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.729505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.729544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.729919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.729960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.730268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.730307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.730602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.730642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.730970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.731011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.731210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.731223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.731469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.731509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 
00:30:09.086 [2024-07-15 15:35:12.731891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.731931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.732244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.732256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.732491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.732503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.732830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.732879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.733194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.733233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.733530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.733570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.733939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.733980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.734340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.734380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.734629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.734668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.734987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.735028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 
00:30:09.086 [2024-07-15 15:35:12.735390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.735430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.735817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.735882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.736196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.736207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.736490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.736530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.736898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.736938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.737174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.737197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.737496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.737509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.737759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.737798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.738198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.738238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 00:30:09.086 [2024-07-15 15:35:12.738484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.086 [2024-07-15 15:35:12.738523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.086 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-15 15:35:12.738909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.738950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.739283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.739322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.739715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.739751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.740071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.740083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.740378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.740390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.740649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.740689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.741053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.741088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.741340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.741392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.741684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.741723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.742111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.742151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-15 15:35:12.742580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.742620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.743004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.743045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.743417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.743463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.743827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.743876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.744190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.744202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.744435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.744447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.744690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.744703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.745013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.745053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.745312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.745352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.745659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.745698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-15 15:35:12.746082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.746095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.746338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.746377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.746712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.746752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.747058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.747099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.747431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.747470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.747790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.747830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.748142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.748182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.748489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.748528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.748786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.748825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.749224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.749263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-15 15:35:12.749560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.749600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.749919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.749960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.750306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.750345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.750660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.750712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.750945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.750958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.751191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.751203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.751381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.751394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.751622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.751634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.751813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.751825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 00:30:09.087 [2024-07-15 15:35:12.752033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.752073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.087 qpair failed and we were unable to recover it. 
00:30:09.087 [2024-07-15 15:35:12.752446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.087 [2024-07-15 15:35:12.752486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.752728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.752767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.753077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.753090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.753346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.753396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.753761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.753800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.754107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.754147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.754460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.754499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.754864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.754905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.755295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.755335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.755704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.755743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-15 15:35:12.755967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.756008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.756394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.756433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.756764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.756810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.757129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.757169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.757528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.757540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.757877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.757890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.758214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.758253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.758589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.758629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.759018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.759059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.759390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.759403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-15 15:35:12.759709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.759749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.759976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.760017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.760325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.760365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.760684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.760724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.761114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.761155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.761407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.761420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.761670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.761682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.761954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.761995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.762293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.762332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.762698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.762738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-15 15:35:12.763076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.763089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.763334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.763346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.763655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.763667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.763939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.763980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.764344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.764383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.764748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.764788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.765103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.765116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.765452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.765491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.765801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.765849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 00:30:09.088 [2024-07-15 15:35:12.766196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.088 [2024-07-15 15:35:12.766236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.088 qpair failed and we were unable to recover it. 
00:30:09.088 [2024-07-15 15:35:12.766623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.088 [2024-07-15 15:35:12.766663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.088 qpair failed and we were unable to recover it.
[... roughly 200 further identical failures omitted: the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats continuously from 15:35:12.766 through 15:35:12.840 ...]
00:30:09.094 [2024-07-15 15:35:12.840466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.094 [2024-07-15 15:35:12.840478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.094 qpair failed and we were unable to recover it.
00:30:09.094 [2024-07-15 15:35:12.840799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.840850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.841159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.841198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.841585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.841631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.842027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.842084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.842418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.842457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.842759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.842799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.843197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.843238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.843627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.843667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.844059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.844100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.844463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.844504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 
00:30:09.094 [2024-07-15 15:35:12.844874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.844915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.845214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.845254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.845620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.845660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.846047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.846089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.846481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.846525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.846850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.846879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.847214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.847255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.847620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.847660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.847983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.848025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.848322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.848362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 
00:30:09.094 [2024-07-15 15:35:12.848752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.848792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.849166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.849207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.849595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.094 [2024-07-15 15:35:12.849635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.094 qpair failed and we were unable to recover it. 00:30:09.094 [2024-07-15 15:35:12.849887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.849928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.850295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.850335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.850724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.850764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.851110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.851150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.851538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.851578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.851891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.851933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.852342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.852383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 
00:30:09.095 [2024-07-15 15:35:12.852682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.852722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.853020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.853062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.853381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.853421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.853808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.853861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.854252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.854293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.854682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.854721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.855122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.855163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.855469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.855509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.855898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.855940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.856332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.856372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 
00:30:09.095 [2024-07-15 15:35:12.856738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.856778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.857096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.857138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.857470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.857509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.857880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.857921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.858309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.858349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.858713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.858753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.859063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.859105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.859405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.859445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.859858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.859901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.860268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.860308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 
00:30:09.095 [2024-07-15 15:35:12.860672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.860713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.861033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.861074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.861466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.861507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.861871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.861912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.862234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.862274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.862506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.862545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.862886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.862927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.863222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.863262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.863610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.863650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.864037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.864079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 
00:30:09.095 [2024-07-15 15:35:12.864393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.864406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.864764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.864804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.865159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.865201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.095 [2024-07-15 15:35:12.865582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.095 [2024-07-15 15:35:12.865595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.095 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.865902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.865944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.866256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.866296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.866663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.866704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.867092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.867133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.867427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.867439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.867753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.867799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 
00:30:09.096 [2024-07-15 15:35:12.868111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.868152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.868444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.868457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.868793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.868842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.869233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.869273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.869490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.869503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.869862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.869903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.870239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.870280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.870645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.870684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.871054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.871096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.871412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.871452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 
00:30:09.096 [2024-07-15 15:35:12.871849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.871890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.872283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.872323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.872721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.872734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.873060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.873074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.873331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.873371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.873681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.873721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.874089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.874131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.874514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.874527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.874856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.874897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.875246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.875287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 
00:30:09.096 [2024-07-15 15:35:12.875616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.875656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.875958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.876000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.876392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.876432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.876678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.876690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.876954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.876967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.877276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.877315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.877634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.877675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.878062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.878104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.096 qpair failed and we were unable to recover it. 00:30:09.096 [2024-07-15 15:35:12.878413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.096 [2024-07-15 15:35:12.878427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.878737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.878776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-07-15 15:35:12.879177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.879219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.879607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.879647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.879969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.880011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.880387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.880428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.880796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.880844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.881234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.881274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.881597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.881637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.881964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.882005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.882336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.882376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.882676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.882722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-07-15 15:35:12.883073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.883114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.883447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.883488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.883794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.883807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.884058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.884088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.884480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.884521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.884910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.884951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.885338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.885378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.885709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.885749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.886062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.886103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.886493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.886533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-07-15 15:35:12.886907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.886949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.887285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.887325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.887673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.887687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.887959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.887989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.888290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.888330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.888696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.888737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.889122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.889163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.889485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.889525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.889906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.889920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 00:30:09.097 [2024-07-15 15:35:12.890231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.097 [2024-07-15 15:35:12.890271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.097 qpair failed and we were unable to recover it. 
00:30:09.097 [2024-07-15 15:35:12.890571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.097 [2024-07-15 15:35:12.890612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.097 qpair failed and we were unable to recover it.
00:30:09.097 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, roughly 200 further times, with timestamps running from 15:35:12.890983 through 15:35:12.970855; duplicate entries elided ...]
00:30:09.103 [2024-07-15 15:35:12.971184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.103 [2024-07-15 15:35:12.971197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.103 qpair failed and we were unable to recover it. 00:30:09.375 [2024-07-15 15:35:12.971377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.375 [2024-07-15 15:35:12.971391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.375 qpair failed and we were unable to recover it. 00:30:09.375 [2024-07-15 15:35:12.971720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.375 [2024-07-15 15:35:12.971734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.375 qpair failed and we were unable to recover it. 00:30:09.375 [2024-07-15 15:35:12.971973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.375 [2024-07-15 15:35:12.971987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.375 qpair failed and we were unable to recover it. 00:30:09.375 [2024-07-15 15:35:12.972240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.375 [2024-07-15 15:35:12.972254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.375 qpair failed and we were unable to recover it. 00:30:09.375 [2024-07-15 15:35:12.972557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.375 [2024-07-15 15:35:12.972570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.375 qpair failed and we were unable to recover it. 00:30:09.375 [2024-07-15 15:35:12.972804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.375 [2024-07-15 15:35:12.972818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.375 qpair failed and we were unable to recover it. 00:30:09.375 [2024-07-15 15:35:12.973095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.375 [2024-07-15 15:35:12.973109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.375 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.973415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.973429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.973780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.973796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 
00:30:09.376 [2024-07-15 15:35:12.974155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.974195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.974513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.974553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.974887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.974928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.975317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.975356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.975747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.975787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.976291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.976372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.976762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.976805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.977157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.977199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.977599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.977640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.978033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.978075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 
00:30:09.376 [2024-07-15 15:35:12.978399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.978440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.978739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.978779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.979178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.979220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.979570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.979611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.980016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.980057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.980446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.980486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.980766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.980784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.981130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.981171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.981471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.981511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.981807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.981825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 
00:30:09.376 [2024-07-15 15:35:12.982191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.982210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.982451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.982496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.982886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.982927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.983227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.983267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.983658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.983698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.984009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.984050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.984441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.984486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.984857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.984898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.376 [2024-07-15 15:35:12.985270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.376 [2024-07-15 15:35:12.985310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.376 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.985600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.985618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 
00:30:09.377 [2024-07-15 15:35:12.985939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.985981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.986390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.986430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.986739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.986757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.987049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.987089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.987410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.987450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.987750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.987790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.988191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.988232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.988602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.988643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.988905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.988924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.989267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.989306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 
00:30:09.377 [2024-07-15 15:35:12.989613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.989653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.990045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.990087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.990405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.990446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.990845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.990886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.991224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.991264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.991651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.991691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.992088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.992130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.992500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.992542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.992841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.992883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.993275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.993314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 
00:30:09.377 [2024-07-15 15:35:12.993677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.993695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.993883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.993902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.994243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.994283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.994672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.994712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.995030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.995071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.995464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.995504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.995884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.995925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.996290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.996330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.377 [2024-07-15 15:35:12.996703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.377 [2024-07-15 15:35:12.996744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.377 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:12.997112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:12.997154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 
00:30:09.378 [2024-07-15 15:35:12.997526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:12.997566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:12.997956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:12.997998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:12.998390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:12.998431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:12.998766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:12.998784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:12.999149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:12.999191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:12.999582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:12.999622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.000013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.000054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.000455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.000497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.000850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.000891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.001191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.001231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 
00:30:09.378 [2024-07-15 15:35:13.001621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.001662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.002030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.002071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.002466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.002507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.002817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.002870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.003264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.003305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.003689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.003707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.003958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.003977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.004236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.004254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.004451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.004470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.004787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.004805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 
00:30:09.378 [2024-07-15 15:35:13.005081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.005099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.005464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.005505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.005831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.005886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.006256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.006296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.006620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.006660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.006991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.007033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.007352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.007392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.007725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.007743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.378 qpair failed and we were unable to recover it. 00:30:09.378 [2024-07-15 15:35:13.008087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.378 [2024-07-15 15:35:13.008129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.008497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.008537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-07-15 15:35:13.008852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.008894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.009216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.009256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.009576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.009617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.010007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.010048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.010359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.010405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.010737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.010777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.011174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.011215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.011573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.011613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.011917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.011936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.012275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.012315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-07-15 15:35:13.012706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.012746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.013107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.013126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.013441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.013460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.013748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.013788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.014182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.014224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.014635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.014676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.014995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.015015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.015335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.015376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.015775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.015816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.016155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.016196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-07-15 15:35:13.016589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.016629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.017021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.017063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.017456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.017497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.017797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.017815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.018141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.018159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.018546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.018586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.018955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.018973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.019284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.019325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.019645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.019685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 00:30:09.379 [2024-07-15 15:35:13.020081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.379 [2024-07-15 15:35:13.020123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420 00:30:09.379 qpair failed and we were unable to recover it. 
00:30:09.379 [2024-07-15 15:35:13.020355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.379 [2024-07-15 15:35:13.020396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:09.379 qpair failed and we were unable to recover it.
[... previous three-line failure repeated ~40 more times for tqpair=0x19dd210, timestamps 15:35:13.020 through 15:35:13.036 ...]
00:30:09.381 [2024-07-15 15:35:13.036708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.381 [2024-07-15 15:35:13.036790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420
00:30:09.381 qpair failed and we were unable to recover it.
[... previous three-line failure repeated ~100 more times for tqpair=0x7ff164000b90, timestamps 15:35:13.036 through 15:35:13.078, always addr=10.0.0.2, port=4420 ...]
[... connect()/qpair failure for tqpair=0x7ff164000b90 continues, timestamps 15:35:13.078 through 15:35:13.080 ...]
00:30:09.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3225893 Killed "${NVMF_APP[@]}" "$@"
[... connect()/qpair failure for tqpair=0x7ff164000b90 continues, timestamps 15:35:13.080 through 15:35:13.081 ...]
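Note: errno 111 is ECONNREFUSED on Linux. The failure storm above is consistent with what this log shows: target_disconnect.sh has just killed the nvmf target app (the "Killed" line), so nothing is listening on 10.0.0.2:4420 and every reconnect attempt made by nvme_tcp_qpair_connect_sock is refused. A minimal standalone C sketch of the failing call follows; it is a hypothetical reproduction under the assumption that no listener is bound to that address, not SPDK code.

    /* Sketch: reproduce "connect() failed, errno = 111" against an
     * address with no listener. On Linux, ECONNREFUSED == 111: the
     * TCP SYN is answered with RST because no socket is bound to
     * 10.0.0.2:4420 once the target process has been killed. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* With the target down this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        close(fd);
        return 0;
    }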
00:30:09.385 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:09.385 [2024-07-15 15:35:13.081440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.081458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:09.385 [2024-07-15 15:35:13.081800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.081819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:09.385 [2024-07-15 15:35:13.082101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.082121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:09.385 [2024-07-15 15:35:13.082484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.082503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:09.385 [2024-07-15 15:35:13.082842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.082861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.083205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.083223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.083563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.083581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.083900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.083919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-07-15 15:35:13.084285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.084303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.084635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.084654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.085019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.085038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.085369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.085388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.085642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.085661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.085936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.085955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.086299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.086318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.086637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.086655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.086976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.086994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 00:30:09.385 [2024-07-15 15:35:13.087356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.385 [2024-07-15 15:35:13.087373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.385 qpair failed and we were unable to recover it. 
00:30:09.385 [2024-07-15 15:35:13.087715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.087733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.087998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.088016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.088357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.088375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.088652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.088671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.089010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.089028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.089346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.089365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.089640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.089658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.089908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.089927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.090190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.090208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 00:30:09.386 [2024-07-15 15:35:13.090525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.386 [2024-07-15 15:35:13.090546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.386 qpair failed and we were unable to recover it. 
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3226710
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3226710
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3226710 ']'
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:09.386 15:35:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved with the xtrace above, the same connect()-refused triplet for tqpair=0x7ff164000b90 repeats 15 times, 15:35:13.090865 through 15:35:13.095182]
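The xtrace lines above show what the harness is doing while the errors stream: it launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, records its pid (3226710), and calls waitforlisten, which polls until the new process opens its RPC socket at rpc_addr=/var/tmp/spdk.sock or max_retries=100 attempts are exhausted. waitforlisten itself is a bash helper in SPDK's autotest_common.sh; the following C sketch is only an illustrative equivalent of that polling loop (the 100 ms retry interval is an assumption, not taken from the script):

    /* waitforlisten.c -- illustrative sketch of the polling idea; the real
     * helper is a bash function in SPDK's autotest_common.sh. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Return 0 once something accepts connections on the UNIX socket at
     * `path`, or -1 after `max_retries` failed attempts. */
    static int wait_for_listen(const char *path, int max_retries)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);  /* target process is up and listening */
                return 0;
            }
            close(fd);
            usleep(100 * 1000);  /* assumed retry interval: 100 ms */
        }
        return -1;  /* process never started listening */
    }

    int main(void)
    {
        /* Socket path and retry budget copied from the trace above. */
        if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
            fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
            return 1;
        }
        puts("RPC socket is up");
        return 0;
    }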
00:30:09.386 [2024-07-15 15:35:13.095544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.386 [2024-07-15 15:35:13.095563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420
00:30:09.386 qpair failed and we were unable to recover it.
[the same triplet repeats 139 more times for tqpair=0x7ff164000b90, 15:35:13.095830 through 15:35:13.139701]
00:30:09.391 [2024-07-15 15:35:13.139818] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization...
00:30:09.391 [2024-07-15 15:35:13.139873] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:09.391 [2024-07-15 15:35:13.139971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.391 [2024-07-15 15:35:13.139988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420
00:30:09.391 qpair failed and we were unable to recover it.
[the triplet repeats 8 more times for tqpair=0x7ff164000b90, 15:35:13.140318 through 15:35:13.142668]
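The two "Starting SPDK / DPDK EAL parameters" lines mark the target process finally initializing. The EAL flags mirror the wrapper's arguments: -c 0xF0 runs the target on cores 4-7 (the same mask passed to nvmf_tgt as -m 0xF0), --file-prefix=spdk0 keeps this instance's hugepage files separate from other SPDK processes on the node, and --proc-type=auto lets DPDK decide between primary and secondary process mode. Purely as an illustration of how such flags reach DPDK (nvmf_tgt builds this argv internally; the log-level flags are omitted here for brevity):

    /* eal_init.c -- illustrative only: hand the EAL flags from the log line
     * above to DPDK. Build against DPDK, e.g. with
     * pkg-config --cflags --libs libdpdk. */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf",                            /* program name, as in the log  */
            "-c", "0xF0",                      /* core mask: cores 4-7         */
            "--no-telemetry",
            "--base-virtaddr=0x200000000000",  /* stable virtual address base  */
            "--match-allocations",
            "--file-prefix=spdk0",             /* per-instance hugepage prefix */
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }
        puts("EAL initialized");
        rte_eal_cleanup();
        return 0;
    }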
00:30:09.391 [2024-07-15 15:35:13.142938 - 15:35:13.143509] posix.c:1038 / nvme_tcp.c:2383: three connect() attempts to 10.0.0.2, port=4420 failed identically (errno = 111, tqpair=0x7ff164000b90); each qpair failed and we were unable to recover it.
00:30:09.391 [2024-07-15 15:35:13.143714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.391 [2024-07-15 15:35:13.143755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:09.391 qpair failed and we were unable to recover it.
00:30:09.391 [2024-07-15 15:35:13.144146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.391 [2024-07-15 15:35:13.144185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420
00:30:09.391 qpair failed and we were unable to recover it.
00:30:09.391 [2024-07-15 15:35:13.144476 - 15:35:13.145788] posix.c:1038 / nvme_tcp.c:2383: five connect() attempts to 10.0.0.2, port=4420 failed identically (errno = 111, tqpair=0x7ff16c000b90); each qpair failed and we were unable to recover it.
00:30:09.392 [2024-07-15 15:35:13.146051 - 15:35:13.175604] posix.c:1038 / nvme_tcp.c:2383: one hundred further connect() attempts to 10.0.0.2, port=4420 failed identically (errno = 111, tqpair=0x7ff16c000b90); each qpair failed and we were unable to recover it.
00:30:09.395 [2024-07-15 15:35:13.175884 - 15:35:13.177528] posix.c:1038 / nvme_tcp.c:2383: six further connect() attempts to 10.0.0.2, port=4420 failed identically (errno = 111, tqpair=0x7ff16c000b90); each qpair failed and we were unable to recover it.
00:30:09.395 EAL: No free 2048 kB hugepages reported on node 1
00:30:09.395 [2024-07-15 15:35:13.177882 - 15:35:13.178797] posix.c:1038 / nvme_tcp.c:2383: four further connect() attempts to 10.0.0.2, port=4420 failed identically (errno = 111, tqpair=0x7ff16c000b90); each qpair failed and we were unable to recover it.
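The interleaved EAL warning means NUMA node 1 had no free 2 MB hugepages when the target started; DPDK backs its memory pools with hugepages, so this counter is worth checking when initialization stalls. A small sketch, independent of the test itself, reading the same per-node figure the kernel exports via sysfs:

#include <stdio.h>

int main(void)
{
    /* Standard sysfs location of NUMA node 1's 2 MB hugepage pool;
     * "free_hugepages" is the count the EAL warning above refers to. */
    const char *path = "/sys/devices/system/node/node1/hugepages/"
                       "hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) == 1)
        printf("node1 free 2048 kB hugepages: %ld\n", free_pages);

    fclose(f);
    return 0;
}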
00:30:09.395 [2024-07-15 15:35:13.179052 - 15:35:13.199645] posix.c:1038 / nvme_tcp.c:2383: seventy further connect() attempts to 10.0.0.2, port=4420 failed identically (errno = 111, tqpair=0x7ff16c000b90); each qpair failed and we were unable to recover it.
00:30:09.398 [2024-07-15 15:35:13.199923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.199936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.200259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.200273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.200525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.200539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.200856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.200869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.201140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.201154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.201482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.201495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.201758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.201772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.202082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.202095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.202421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.202434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.202681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.202694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 
00:30:09.398 [2024-07-15 15:35:13.203038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.203051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.203316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.203330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.203584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.203598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.203940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.203954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.204215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.204229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.204409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.204422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.204616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.204630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.204943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.204956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.205258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.205271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.205530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.205544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 
00:30:09.398 [2024-07-15 15:35:13.205858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.205872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.206132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.206146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.206418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.206432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.206734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.206748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.206992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.207005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.207331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.398 [2024-07-15 15:35:13.207344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.398 qpair failed and we were unable to recover it. 00:30:09.398 [2024-07-15 15:35:13.207574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.207590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.207846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.207860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.208113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.208127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.208361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.208374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 
00:30:09.399 [2024-07-15 15:35:13.208643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.208656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.208954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.208967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.209301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.209314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.209616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.209629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.209941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.209955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.210256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.210269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.210580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.210594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.210920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.210934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.211183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.211197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.211522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.211536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 
00:30:09.399 [2024-07-15 15:35:13.211860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.211874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.212200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.212213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.212538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.212552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.212881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.212895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.213198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.213211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.213560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.213573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.213808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.213822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.214150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.214163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.214492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.214506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.214824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.214841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 
00:30:09.399 [2024-07-15 15:35:13.215167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.215180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.215506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.215520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.215847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.215861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.216187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.216201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.216527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.216541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.216867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.399 [2024-07-15 15:35:13.216880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.399 qpair failed and we were unable to recover it. 00:30:09.399 [2024-07-15 15:35:13.217185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.217198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.217468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.217481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.217739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.217753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.218024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.218037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 
00:30:09.400 [2024-07-15 15:35:13.218364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.218377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.218740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.218753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.219055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.219069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.219325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.219339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.219640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.219654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.219958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.219971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.220309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.220324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.220561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.220574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.220827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.220845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.221169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.221182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 
00:30:09.400 [2024-07-15 15:35:13.221507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.221521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.221846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.221860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.222057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.222070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.222418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.222431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.222709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.222722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.222968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.222982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.223164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.223177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.223417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.223431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.223732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.223745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.223913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.223927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 
00:30:09.400 [2024-07-15 15:35:13.224109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.224122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.224445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.224459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.224694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.224707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.225027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.225041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.225276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.225290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.225564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.225578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.225911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.225925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.226198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.226211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.226482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.226496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 00:30:09.400 [2024-07-15 15:35:13.226820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.400 [2024-07-15 15:35:13.226836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.400 qpair failed and we were unable to recover it. 
00:30:09.401 [2024-07-15 15:35:13.227095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.227109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.227432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.227446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.227772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.227785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.228038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.228051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.228315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.228329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.228562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.228575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.228826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.228842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.229164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.229177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.229495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.229509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.229830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.229854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 
00:30:09.401 [2024-07-15 15:35:13.230107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.230121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.230368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.230381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.230645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.230659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.230989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.231002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.231255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.231269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.231557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.231570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.231840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.401 [2024-07-15 15:35:13.231898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.231911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.232232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.232246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.232523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.232538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 
00:30:09.401 [2024-07-15 15:35:13.232741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.232755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.233082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.233096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.233463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.233477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.233736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.233751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.234071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.234085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.234416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.234430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.234709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.234724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.235038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.235052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.235288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.235302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.401 [2024-07-15 15:35:13.235569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.235583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 
00:30:09.401 [2024-07-15 15:35:13.235916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.401 [2024-07-15 15:35:13.235931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.401 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.236208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.236221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.236548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.236563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.236804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.236818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.237149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.237164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.237492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.237506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.237812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.237825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.238089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.238104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.238355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.238369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 00:30:09.402 [2024-07-15 15:35:13.238671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.402 [2024-07-15 15:35:13.238686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.402 qpair failed and we were unable to recover it. 
00:30:09.402 [2024-07-15 15:35:13.239014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.402 [2024-07-15 15:35:13.239029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.402 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times, with only the timestamps advancing from [2024-07-15 15:35:13.239357] through [2024-07-15 15:35:13.300147] (console time 00:30:09.402 to 00:30:09.682) ...]
00:30:09.682 [2024-07-15 15:35:13.300383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.300397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.300656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.300670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.300922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.300937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.301202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.301216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.301540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.301554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.301856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.301871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.302049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.302063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.302365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.302381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.302652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.302666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.302987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.303001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 
00:30:09.682 [2024-07-15 15:35:13.303208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.303222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.303480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.303494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.303755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.303769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.304022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.304037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.304316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.304330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.304606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.304619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.304946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.304961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.305239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.305254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.305563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.305578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 00:30:09.682 [2024-07-15 15:35:13.305923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.682 [2024-07-15 15:35:13.305938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.682 qpair failed and we were unable to recover it. 
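errno = 111 here is ECONNREFUSED: each connect() to 10.0.0.2:4420 is being actively refused, which during test startup usually just means the NVMe/TCP target is not listening yet, so the host-side qpairs keep retrying. A minimal spot-check from the initiator side might look like the following (hypothetical commands for illustration; they are not part of this run and assume nc and nvme-cli are installed):

    # is anything accepting TCP on the NVMe/TCP port yet?
    nc -zv 10.0.0.2 4420
    # once the target is up, its discovery service should answer:
    nvme discover -t tcp -a 10.0.0.2 -s 4420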
00:30:09.682 [2024-07-15 15:35:13.306375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:09.682 [2024-07-15 15:35:13.306407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:09.682 [2024-07-15 15:35:13.306417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:09.682 [2024-07-15 15:35:13.306427] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:09.682 [2024-07-15 15:35:13.306435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:09.682 [2024-07-15 15:35:13.306556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:09.682 [2024-07-15 15:35:13.306665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:09.682 [2024-07-15 15:35:13.306774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:09.682 [2024-07-15 15:35:13.306775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
[... interleaved with the NOTICE lines above, the connect() failed (errno = 111) / qpair failed records for tqpair=0x7ff16c000b90 continue from 15:35:13.306 through 15:35:13.307 ...]
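The app_setup_trace notices above give the capture recipe for the tracepoints enabled by mask 0xFFFF. A sketch of how one might use it while the app is still running (hypothetical invocation built from the notice text; the shm path and instance id are taken verbatim from the log):

    # snapshot the live tracepoints named in the notice
    spdk_trace -s nvmf -i 0
    # or save the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0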
[... the connect() failed (errno = 111) / qpair failed records for tqpair=0x7ff16c000b90 continue from 15:35:13.308 through 15:35:13.312 ...]
00:30:09.683 [2024-07-15 15:35:13.313035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.683 [2024-07-15 15:35:13.313077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420
00:30:09.683 qpair failed and we were unable to recover it.
00:30:09.683 [2024-07-15 15:35:13.313425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.683 [2024-07-15 15:35:13.313460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:09.683 qpair failed and we were unable to recover it.
00:30:09.683 [2024-07-15 15:35:13.313683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.683 [2024-07-15 15:35:13.313720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420
00:30:09.683 qpair failed and we were unable to recover it.
[... the connect() failed (errno = 111) / qpair failed records for tqpair=0x7ff16c000b90 resume and repeat, with only the timestamp advancing, from 15:35:13.313 through 15:35:13.344 ...]
00:30:09.686 [2024-07-15 15:35:13.344896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.344910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.345153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.345167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.345469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.345482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.345740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.345754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.346080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.346095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.346297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.346311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.346447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.346460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.346659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.346673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.346910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.346924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.347198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.347213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 
00:30:09.686 [2024-07-15 15:35:13.347398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.347413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.347651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.347665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.347929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.347944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.348223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.348238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.348419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.348433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.348782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.348796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.686 [2024-07-15 15:35:13.349136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.686 [2024-07-15 15:35:13.349150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.686 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.349395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.349409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.349795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.349808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.350064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.350079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 
00:30:09.687 [2024-07-15 15:35:13.350380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.350395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.350574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.350588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.350926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.350940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.351128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.351142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.351314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.351328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.351559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.351574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.351830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.351847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.352149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.352164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.352490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.352505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.352767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.352782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 
00:30:09.687 [2024-07-15 15:35:13.353041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.353057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.353366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.353383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.353646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.353661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.353926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.353940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.354247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.354262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.354470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.354484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.354791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.354806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.355062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.355081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.355426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.355440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.355753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.355768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 
00:30:09.687 [2024-07-15 15:35:13.356005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.356019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.356208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.356222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.356456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.356469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.356731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.356745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.356999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.357014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.357203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.357217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.357459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.357473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.357792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.357806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.358080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.358094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.358274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.358288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 
00:30:09.687 [2024-07-15 15:35:13.358492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.358507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.358842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.358856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.359111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.359126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.359323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.359337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.359520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.359534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.359853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.359867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.360048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.360062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.360296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.360309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.360539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.687 [2024-07-15 15:35:13.360552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.687 qpair failed and we were unable to recover it. 00:30:09.687 [2024-07-15 15:35:13.360792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.360806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-15 15:35:13.361075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.361089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.361389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.361404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.361745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.361760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.362019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.362032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.362294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.362308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.362638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.362651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.362963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.362976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.363312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.363326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.363668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.363681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.364035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.364050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-15 15:35:13.364305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.364318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.364620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.364633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.364889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.364903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.365204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.365217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.365480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.365494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.365676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.365689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.365994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.366007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.366330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.366345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.366581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.366594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.366881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.366895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-15 15:35:13.367081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.367094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.367416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.367429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.367712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.367726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.367898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.367912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.368234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.368247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.368561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.368574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.368839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.368853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.369181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.369194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.369451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.369465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.369738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.369751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 
00:30:09.688 [2024-07-15 15:35:13.370081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.370095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.370348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.370361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.370626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.370639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.370947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.370961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.371284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.371298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.371619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.371632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.371883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.371897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.372220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.688 [2024-07-15 15:35:13.372233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.688 qpair failed and we were unable to recover it. 00:30:09.688 [2024-07-15 15:35:13.372515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.372529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.372829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.372847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-15 15:35:13.373118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.373131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.373451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.373464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.373770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.373783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.374096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.374109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.374346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.374360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.374667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.374680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.375007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.375021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.375278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.375292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.375607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.375621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.375925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.375939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-15 15:35:13.376239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.376252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.376512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.376526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.376778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.376792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.377112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.377125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.377379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.377392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.377760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.377774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.377980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.377993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.378228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.378243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.378425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.378439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.378690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.378703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-15 15:35:13.378937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.378951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.379189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.379202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.379476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.379489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.379789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.379802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.380022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.380036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.380318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.380331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.380597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.380611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.380870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.380884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.381124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.381137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 00:30:09.689 [2024-07-15 15:35:13.381443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.689 [2024-07-15 15:35:13.381456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.689 qpair failed and we were unable to recover it. 
00:30:09.689 [2024-07-15 15:35:13.381690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.689 [2024-07-15 15:35:13.381703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.689 qpair failed and we were unable to recover it.
00:30:09.689-00:30:09.695 [... the connect()/qpair-error triple above repeats for roughly 210 attempts in total, from 15:35:13.381690 through 15:35:13.442729; every attempt fails with errno = 111 against addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."; tqpair is 0x7ff16c000b90 throughout, except for nine attempts logged between 15:35:13.408646 and 15:35:13.411045 where it is 0x19dd210 ...]
00:30:09.695 [2024-07-15 15:35:13.443050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.443064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.443373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.443387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.443726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.443739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.444089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.444102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.444452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.444468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.444699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.444712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.444966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.444979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.445246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.445259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.445549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.445562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.445813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.445826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 
00:30:09.695 [2024-07-15 15:35:13.446095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.446109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.446366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.446379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.446631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.446644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.446878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.446892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.447192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.447206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.447530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.447543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.447731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.447744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.448049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.448063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.448367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.448380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.448702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.448716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 
00:30:09.695 [2024-07-15 15:35:13.449052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.449066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.449303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.449315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.449629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.449642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.449983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.449996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.450247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.450260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.450582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.450595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.450839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.450853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.451127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.451140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.451449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.451462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.695 qpair failed and we were unable to recover it. 00:30:09.695 [2024-07-15 15:35:13.451787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.695 [2024-07-15 15:35:13.451800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 
00:30:09.696 [2024-07-15 15:35:13.452081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.452095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.452398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.452411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.452758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.452772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.453121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.453134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.453484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.453497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.453845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.453858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.454203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.454217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.454562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.454576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.454924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.454938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.455286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.455299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 
00:30:09.696 [2024-07-15 15:35:13.455557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.455571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.455874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.455887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.456129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.456143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.456320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.456334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.456657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.456672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.456988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.457002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.457331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.457344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.457584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.457597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.457871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.457884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.458118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.458131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 
00:30:09.696 [2024-07-15 15:35:13.458310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.458324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.458649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.458663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.458974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.458987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.459328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.459341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.459576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.459589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.459851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.459865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.460192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.460206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.460376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.460389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.460646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.460659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.460922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.460935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 
00:30:09.696 [2024-07-15 15:35:13.461237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.461250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.461503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.461516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.461771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.461784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.462108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.462121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.696 [2024-07-15 15:35:13.462455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.696 [2024-07-15 15:35:13.462468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.696 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.462791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.462805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.463065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.463079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.463315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.463328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.463601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.463615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.463868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.463881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 
00:30:09.697 [2024-07-15 15:35:13.464207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.464221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.464548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.464561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.464889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.464902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.465173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.465187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.465501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.465514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.465852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.465866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.466098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.466112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.466303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.466317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.466638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.466651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.466975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.466988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 
00:30:09.697 [2024-07-15 15:35:13.467312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.467325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.467579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.467592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.467935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.467950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.468181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.468194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.468365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.468383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.468685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.468699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.468974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.468988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.469330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.469342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.469645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.469658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.469958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.469972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 
00:30:09.697 [2024-07-15 15:35:13.470296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.470309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.470618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.470632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.470882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.470896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.471137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.471151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.471344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.471358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.471615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.471628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.471880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.471894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.472219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.472233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.472504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.472517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.472773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.472787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 
00:30:09.697 [2024-07-15 15:35:13.473136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.473150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.473500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.473513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.473866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.473879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.474111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.474124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.697 [2024-07-15 15:35:13.474407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.697 [2024-07-15 15:35:13.474421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.697 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.474587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.474601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.474943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.474956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.475279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.475293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.475592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.475606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.475773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.475787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 
00:30:09.698 [2024-07-15 15:35:13.476138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.476152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.476403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.476416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.476647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.476661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.476930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.476943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.477247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.477260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.477581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.477594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.477903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.477916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.478240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.478253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.478580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.478593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.478895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.478908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 
00:30:09.698 [2024-07-15 15:35:13.479234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.479247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.479547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.479560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.479796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.479809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.480140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.480154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.480432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.480448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.480771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.480785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.481061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.481075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.481352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.481365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.481601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.481615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 00:30:09.698 [2024-07-15 15:35:13.481941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.698 [2024-07-15 15:35:13.481955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.698 qpair failed and we were unable to recover it. 
00:30:09.698 [2024-07-15 15:35:13.482258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.698 [2024-07-15 15:35:13.482271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.698 qpair failed and we were unable to recover it.
00:30:09.703 (the three-line error sequence above repeats for every reconnect attempt from [2024-07-15 15:35:13.482615] through [2024-07-15 15:35:13.543734]: each connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7ff16c000b90, and each time the qpair fails and is not recovered)
00:30:09.703 [2024-07-15 15:35:13.543967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.703 [2024-07-15 15:35:13.543981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.703 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.544311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.544324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.544651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.544665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.544990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.545003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.545237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.545251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.545494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.545508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.545743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.545757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.546079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.546094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.546417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.546431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.546714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.546728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 
00:30:09.704 [2024-07-15 15:35:13.547030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.547044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.547355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.547368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.547678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.547692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.547954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.547968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.548310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.548323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.548555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.548569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.548846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.548860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.549123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.549137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.549459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.549473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.549656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.549669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 
00:30:09.704 [2024-07-15 15:35:13.549930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.549944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.550188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.550201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.550527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.550540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.550790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.550803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.551081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.551095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.551368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.551382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.551702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.551715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.552042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.552055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.552382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.552396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.552699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.552712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 
00:30:09.704 [2024-07-15 15:35:13.552977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.552991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.553317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.553330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.553587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.553601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.553805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.553818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.554131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.554170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.554554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.554590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.554865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.554891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.555200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.555214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.555494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.555508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.555846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.555860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 
00:30:09.704 [2024-07-15 15:35:13.556115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.556129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.704 qpair failed and we were unable to recover it. 00:30:09.704 [2024-07-15 15:35:13.556452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.704 [2024-07-15 15:35:13.556465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.556774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.556787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.557112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.557126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.557453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.557466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.557789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.557802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.558110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.558123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.558392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.558405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.558674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.558688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.558967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.558981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 
00:30:09.705 [2024-07-15 15:35:13.559224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.559238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.559560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.559575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.559753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.559766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.560044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.560058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.560317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.560330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.560553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.560567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.560888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.560903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.561138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.561151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.561403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.561416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.561593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.561607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 
00:30:09.705 [2024-07-15 15:35:13.561784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.561798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.562120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.562134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.562436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.562449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.562797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.562810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.563163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.563177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.563342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.563355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.563681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.563695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.563883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.563897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.564197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.564210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.564457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.564471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 
00:30:09.705 [2024-07-15 15:35:13.564707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.564720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.564989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.565002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.565325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.565339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.565671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.565684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.566011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.566024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.566334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.705 [2024-07-15 15:35:13.566348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.705 qpair failed and we were unable to recover it. 00:30:09.705 [2024-07-15 15:35:13.566687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.566700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.566950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.566964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.567309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.567323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.567642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.567655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 
00:30:09.706 [2024-07-15 15:35:13.567987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.568001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.568310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.568324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.568580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.568593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.568910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.568924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.569178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.569191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.569370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.569383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.569617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.569630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.569795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.569809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.570137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.570150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.570474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.570487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 
00:30:09.706 [2024-07-15 15:35:13.570800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.570814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.571129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.571144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.571398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.571411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.571734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.571748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.571984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.571998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.572326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.572340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.572574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.572587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.572773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.572787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.573085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.573098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.573353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.573366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 
00:30:09.706 [2024-07-15 15:35:13.573649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.573662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.573930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.573944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.574248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.574261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.574510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.574524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.706 [2024-07-15 15:35:13.574785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.706 [2024-07-15 15:35:13.574799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.706 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.574969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.574983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.575308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.575321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.575635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.575649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.575961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.575975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.576276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.576289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 
00:30:09.974 [2024-07-15 15:35:13.576541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.576554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.576800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.576814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.577143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.577156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.577521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.577534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.577839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.577852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.578029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.578042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.578368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.578381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.578699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.578712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.579042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.579056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.579307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.579320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 
00:30:09.974 [2024-07-15 15:35:13.579645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.579659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.974 [2024-07-15 15:35:13.579986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.974 [2024-07-15 15:35:13.580000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.974 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.580247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.580261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.580517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.580531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.580769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.580782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.581017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.581031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.581299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.581312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.581612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.581626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.581949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.581963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 00:30:09.975 [2024-07-15 15:35:13.582212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.975 [2024-07-15 15:35:13.582225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.975 qpair failed and we were unable to recover it. 
00:30:09.975 [2024-07-15 15:35:13.582567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.975 [2024-07-15 15:35:13.582580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.975 qpair failed and we were unable to recover it.
[... the same three-record failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 15:35:13.582567 and 15:35:13.643708 ...]
00:30:09.979 [2024-07-15 15:35:13.643708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.979 [2024-07-15 15:35:13.643720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.979 qpair failed and we were unable to recover it.
00:30:09.979 [2024-07-15 15:35:13.644044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.644057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.644292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.644304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.644606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.644618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.644868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.644881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.645211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.645224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.645588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.645601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.645930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.645942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.646197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.646209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.646453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.646466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.646649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.646660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 
00:30:09.979 [2024-07-15 15:35:13.646918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.646930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.647254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.647266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.647506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.647519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.647807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.647819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.648150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.648163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.648418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.648431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.648767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.648781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.649149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.649162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.649467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.649479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.649828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.649844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 
00:30:09.979 [2024-07-15 15:35:13.650103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.650115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.650348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.650360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.650737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.650749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.650982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.650994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.651259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.651271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.651591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.979 [2024-07-15 15:35:13.651603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.979 qpair failed and we were unable to recover it. 00:30:09.979 [2024-07-15 15:35:13.651905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.651917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.652083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.652095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.652347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.652360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.652632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.652646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-15 15:35:13.652948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.652961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.653288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.653303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.653606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.653620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.653878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.653891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.654143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.654155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.654424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.654437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.654689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.654701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.654951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.654963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.655320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.655332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.655580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.655592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-15 15:35:13.655841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.655853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.656177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.656188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.656463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.656475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.656780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.656791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.657093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.657106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.657410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.657422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.657652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.657664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.658012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.658024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.658354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.658367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.658643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.658655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-15 15:35:13.658959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.658971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.659290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.659302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.659636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.659648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.659964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.659976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.660230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.660243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.660566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.660578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.660925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.660937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.661248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.661261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.661526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.661538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.661861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.661874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-15 15:35:13.662178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.662190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.662453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.662465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.662766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.662778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.663056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.663068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.663341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.663353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.663676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.663688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.663945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.663957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.664273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.664285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.664616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.664628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.664955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.664967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 
00:30:09.980 [2024-07-15 15:35:13.665289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.665301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.665589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.665604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.665886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.980 [2024-07-15 15:35:13.665898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.980 qpair failed and we were unable to recover it. 00:30:09.980 [2024-07-15 15:35:13.666152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.666164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.666430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.666442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.666765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.666777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.666982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.666994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.667272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.667285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.667587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.667599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.667945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.667958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
00:30:09.981 [2024-07-15 15:35:13.668210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.668222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.668427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.668439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.668765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.668777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.669035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.669047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.669398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.669411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.669755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.669767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.670020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.670033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.670307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.670319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.670645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.670657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.670935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.670947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
00:30:09.981 [2024-07-15 15:35:13.671247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.671259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.671439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.671452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.671688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.671700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.672000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.672012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.672334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.672347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.672656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.672668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.672938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.672951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.673138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.673151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.673482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.673494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.673779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.673790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
00:30:09.981 [2024-07-15 15:35:13.674100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.674113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.674445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.674457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.674786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.674799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.675039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.675051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.675327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.675340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.675642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.675654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.675900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.675912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.676192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.676204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.676491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.676503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.676737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.676750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 
00:30:09.981 [2024-07-15 15:35:13.677059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.677072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.677316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.677331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.677657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.677669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.677994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.678007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.981 [2024-07-15 15:35:13.678312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.981 [2024-07-15 15:35:13.678325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.981 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.678527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.678539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.678868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.678880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.679153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.679166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.679494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.679506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.679759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.679771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 
00:30:09.982 [2024-07-15 15:35:13.680138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.680150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.680451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.680463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.680752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.680765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.681000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.681012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.681290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.681302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.681540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.681552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.681748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.681759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.682084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.682096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.682348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.682359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 00:30:09.982 [2024-07-15 15:35:13.682604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.982 [2024-07-15 15:35:13.682616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.982 qpair failed and we were unable to recover it. 
00:30:09.982 [2024-07-15 15:35:13.682939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.982 [2024-07-15 15:35:13.682951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.982 qpair failed and we were unable to recover it.
00:30:09.982 [last three messages repeated, with advancing timestamps, for every reconnect attempt from 15:35:13.683 through 15:35:13.743: connect() errno = 111 (ECONNREFUSED) to 10.0.0.2 port 4420, tqpair=0x7ff16c000b90]
00:30:09.987 [2024-07-15 15:35:13.743160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.987 [2024-07-15 15:35:13.743173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.987 qpair failed and we were unable to recover it.
00:30:09.987 [2024-07-15 15:35:13.743382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.743394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.743724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.743736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.743934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.743947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.744275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.744287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.744683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.744695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.745027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.745039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.745283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.745295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.745600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.745612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.745916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.745929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.746175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.746187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 
00:30:09.987 [2024-07-15 15:35:13.746382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.746394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.746654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.746666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.746982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.746995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.747296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.747308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.747647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.747659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.747920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.747932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.748138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.748150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.748354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.748365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.748647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.748659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.749033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.749045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 
00:30:09.987 [2024-07-15 15:35:13.749247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.749258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.749582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.749594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.987 [2024-07-15 15:35:13.749842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.987 [2024-07-15 15:35:13.749854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.987 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.750037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.750049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.750255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.750267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.750536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.750548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.750850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.750862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.751064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.751076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.751268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.751280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.751532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.751543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 
00:30:09.988 [2024-07-15 15:35:13.751874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.751886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.752180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.752192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.752385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.752397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.752599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.752610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.752861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.752874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.753126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.753137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.753440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.753451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.753776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.753788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.754048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.754062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.754386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.754398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 
00:30:09.988 [2024-07-15 15:35:13.754708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.754719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.755029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.755041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.755280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.755292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.755642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.755653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.755929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.755941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.756193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.756205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.756511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.756524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.756781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.756793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.756988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.757000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.757311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.757323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 
00:30:09.988 [2024-07-15 15:35:13.757574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.757586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.757845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.757856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.758109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.758121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.758370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.758382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.758709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.758721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.758966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.758978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.759245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.759257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.759563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.759575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.759740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.759752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.760006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.760018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 
00:30:09.988 [2024-07-15 15:35:13.760272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.760284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.760526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.760538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.760864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.988 [2024-07-15 15:35:13.760877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.988 qpair failed and we were unable to recover it. 00:30:09.988 [2024-07-15 15:35:13.761133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.761145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.761406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.761419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.761624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.761636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.761915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.761928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.762175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.762187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.762443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.762455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.762705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.762717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 
00:30:09.989 [2024-07-15 15:35:13.762959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.762971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.763295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.763307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.763578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.763590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.763769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.763781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.764032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.764044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.764365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.764377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.764625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.764637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.764912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.764924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.765178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.765192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.765389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.765400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 
00:30:09.989 [2024-07-15 15:35:13.765703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.765715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.766003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.766015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.766349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.766361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.766697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.766708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.766949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.766961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.767263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.767274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.767622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.767634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.767935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.767947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.768227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.768239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.768517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.768530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 
00:30:09.989 [2024-07-15 15:35:13.768765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.768777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.769051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.769063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.769410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.769423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.769689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.769701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.769972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.769984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.770194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.770206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.770451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.770463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.770748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.770761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.771014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.771026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 00:30:09.989 [2024-07-15 15:35:13.771346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.989 [2024-07-15 15:35:13.771358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.989 qpair failed and we were unable to recover it. 
00:30:09.990 [2024-07-15 15:35:13.771631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.771644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.771888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.771900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.772157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.772170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.772402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.772414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.772740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.772753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.773018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.773030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.773318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.773330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.773589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.773601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.773954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.773967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.774292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.774305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 
00:30:09.990 [2024-07-15 15:35:13.774515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.774527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.774798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.774810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.775073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.775086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.775345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.775357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.775547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.775560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.775907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.775919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.776183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.776195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.776462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.776473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.776708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.776722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.776992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.777005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 
00:30:09.990 [2024-07-15 15:35:13.777262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.777274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.777465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.777476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.777677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.777689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.777980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.777992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.778235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.778247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.778503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.778514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.778761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.778773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.779027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.779045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.779361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.779373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 00:30:09.990 [2024-07-15 15:35:13.779631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.990 [2024-07-15 15:35:13.779642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.990 qpair failed and we were unable to recover it. 
00:30:09.990 [2024-07-15 15:35:13.779968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.990 [2024-07-15 15:35:13.779980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.990 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 15:35:13.780 through 15:35:13.813, all with tqpair=0x7ff16c000b90, addr=10.0.0.2, port=4420 ...]
00:30:09.994 [2024-07-15 15:35:13.813563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.994 [2024-07-15 15:35:13.813595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420
00:30:09.994 qpair failed and we were unable to recover it.
[... the record repeats for tqpair=0x7ff174000b90 through 15:35:13.815, then again for tqpair=0x7ff16c000b90 from 15:35:13.815 through 15:35:13.830 ...]
00:30:09.996 [2024-07-15 15:35:13.830939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.996 [2024-07-15 15:35:13.830957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:09.996 qpair failed and we were unable to recover it.
00:30:09.996 [2024-07-15 15:35:13.831146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.831159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.831418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.831430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.831614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.831625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.831812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.831824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.832086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.832099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.832339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.832351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.832516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.832528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.832709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.832721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.832982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.832995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.833191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.833204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 
00:30:09.996 [2024-07-15 15:35:13.833401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.833413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.833711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.833723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.833965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.833978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.834243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.834255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.834437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.834449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.834564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.834576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.834841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.834854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.835022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.835034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.835272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.835283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.835488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.835500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 
00:30:09.996 [2024-07-15 15:35:13.835670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.835682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.835854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.835867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.836040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.836053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.836306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.836319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.836498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.836510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.836742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.836755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.836923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.836936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.837237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.837250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.837431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.837443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.837666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.837678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 
00:30:09.996 [2024-07-15 15:35:13.837931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.996 [2024-07-15 15:35:13.837944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.996 qpair failed and we were unable to recover it. 00:30:09.996 [2024-07-15 15:35:13.838245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.838257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.838507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.838519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.838794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.838805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.839052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.839064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.839365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.839377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.839560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.839572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.839816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.839829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.840070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.840083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.840276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 
00:30:09.997 [2024-07-15 15:35:13.840445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.840457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.840636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.840648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.840908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.840921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.841174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.841188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.841371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.841383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.841549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.841563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.841858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.841872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.841971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.841983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.842072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.842084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.842262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.842275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 
00:30:09.997 [2024-07-15 15:35:13.842466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.842478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.842735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.842747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.843104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.843116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.843283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.843295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.843488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.843499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.843756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.843769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.844074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.844086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.844394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.844406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.844525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.844537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.844772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.844784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 
00:30:09.997 [2024-07-15 15:35:13.845036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.845048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.845340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.845351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.845675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.845686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.845932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.845945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.846183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.846196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.846470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.846482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.846726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.846738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.846981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.846993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.847173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.847185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.847358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.847369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 
00:30:09.997 [2024-07-15 15:35:13.847608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.847620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.997 qpair failed and we were unable to recover it. 00:30:09.997 [2024-07-15 15:35:13.847812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.997 [2024-07-15 15:35:13.847823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.848025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.848038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.848290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.848302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.848640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.848653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.848957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.848969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.849205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.849217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.849542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.849554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.849856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.849868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.850125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.850137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 
00:30:09.998 [2024-07-15 15:35:13.850290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.850302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.850618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.850633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.850882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.850895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.851098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.851110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.851354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.851367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.851548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.851562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.851864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.851877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.852072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.852084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.852285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.852297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.852470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.852482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 
00:30:09.998 [2024-07-15 15:35:13.852740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.852752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.853004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.853016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.853197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.853210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.853440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.853452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.853686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.853698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.854003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.854015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.854182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.854194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.854428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.854439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.854743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.854755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.854954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.854967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 
00:30:09.998 [2024-07-15 15:35:13.855208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.855221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.855524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.855537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.855694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.855706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.855997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.856011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.856364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.856376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.856623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.856635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.998 [2024-07-15 15:35:13.856796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.998 [2024-07-15 15:35:13.856808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.998 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.856993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.857007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.857285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.857297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.857481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.857493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 
00:30:09.999 [2024-07-15 15:35:13.857746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.857759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.857997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.858009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.858252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.858264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.858531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.858543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.858781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.858793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.859025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.859038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.859275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.859288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.859591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.859603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.859742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.859753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.859946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.859960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 
00:30:09.999 [2024-07-15 15:35:13.860088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.860100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.860228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.860240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.860411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.860423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.860526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.860538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.860773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.860785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.860967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.860983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.861148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.861160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.861466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.861478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.861653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.861665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.861965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.861978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 
00:30:09.999 [2024-07-15 15:35:13.862102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.862114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.862366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.862378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.862593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.862605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.862838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.862851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.863043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.863056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.863320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.863332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.863459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.863471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.863714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.863726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.863905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.863919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.864210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.864222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 
00:30:09.999 [2024-07-15 15:35:13.864465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.864478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.864657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.864669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.864852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.864864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.865107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.865119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.865361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.865373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.865617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.865629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.865900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.999 [2024-07-15 15:35:13.865912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:09.999 qpair failed and we were unable to recover it. 00:30:09.999 [2024-07-15 15:35:13.866146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.866158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.866406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.866419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.866608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 
00:30:10.000 [2024-07-15 15:35:13.866810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.866822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.867072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.867084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.867325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.867337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.867604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.867616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.867878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.867891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.868056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.868068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.868252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.868265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.868434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.868446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.868766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.868778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.869095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.869109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 
00:30:10.000 [2024-07-15 15:35:13.869298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.869310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.869543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.869555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.869726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.869737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.000 [2024-07-15 15:35:13.869916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.000 [2024-07-15 15:35:13.869928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.000 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.870159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.870171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.870420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.870434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.870665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.870678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.870914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.870927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.871121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.871132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.871260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.871272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 
00:30:10.269 [2024-07-15 15:35:13.871504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.871516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.871848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.871861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.872059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.872071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.872380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.872392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.872648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.872660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.872921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.872933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.873136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.873148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.873348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.873360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.873672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.873684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.873929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.873941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 
00:30:10.269 [2024-07-15 15:35:13.874139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.874150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.874345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.874357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.874529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.874541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.874713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.874725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.874964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.874977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.875304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.875317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.875560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.875571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.875820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.875835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.876088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.876100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.876268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.876280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 
00:30:10.269 [2024-07-15 15:35:13.876459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.876470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.876635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.876647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.876860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.876873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.877058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.877070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.269 [2024-07-15 15:35:13.877247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.269 [2024-07-15 15:35:13.877259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.269 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.877455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.877467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.877723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.877735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.877994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.878007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.878252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.878264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.878453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.878465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-07-15 15:35:13.878715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.878727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.878839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.878852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.879088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.879100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.879350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.879362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.879619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.879632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.879889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.879904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.880076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.880091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.880273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.880285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.880460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.880472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.880735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.880747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-07-15 15:35:13.880940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.880953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.881140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.881152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.881383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.881395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.881649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.881661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.881893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.881906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.882097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.882109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.882299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.882312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.882493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.882506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.882617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.882629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.882805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.882817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-07-15 15:35:13.883052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.883065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.883313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.883326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.883605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.883617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.883859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.883872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.884118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.884131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.884310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.884322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.884556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.884568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.884813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.884826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.885018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.885031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.885339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.885352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-07-15 15:35:13.885528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.885540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.885856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.885868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.886054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.886066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.886347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.886360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.886611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.886623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.886861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.886873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.887038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.887050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.887230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.887242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.887488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.887500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.887677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.887689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-07-15 15:35:13.887955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.887967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.888300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.888312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.888509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.888521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.888779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.888792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.888976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.888988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.889220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.889235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.889470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.889482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.889644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.889657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.889821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.889838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.890006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.890018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 
00:30:10.270 [2024-07-15 15:35:13.890255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.890267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.890438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.890451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.890616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.890629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.890933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.270 [2024-07-15 15:35:13.890946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.270 qpair failed and we were unable to recover it. 00:30:10.270 [2024-07-15 15:35:13.891190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.891202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.891441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.891454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.891617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.891629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.891959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.891972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.892172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.892184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.892413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.892425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-07-15 15:35:13.892620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.892633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.892813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.892825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.893017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.893029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.893276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.893289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.893461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.893473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.893729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.893741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.893974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.893987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.894217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.894229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.894395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.894407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.894602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.894615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-07-15 15:35:13.894866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.894879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.895179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.895192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.895478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.895490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.895728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.895741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.896012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.896025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.896258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.896271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.896511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.896523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.896719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.896732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.896965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.896978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.897158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.897170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-07-15 15:35:13.897406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.897417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.897604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.897616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.897795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.897807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.898070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.898082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.898317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.898330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.898522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.898537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.898709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.898721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.898966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.898977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.899247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.899259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.899515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.899528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-07-15 15:35:13.899790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.899803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.900078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.900090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.900258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.900271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.900474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.900487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.900730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.900742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.901074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.901086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.901282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.901295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.901493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.901505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.901742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.901755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.902016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.902028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.271 [2024-07-15 15:35:13.902269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.902282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.902517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.902531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.902708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.902720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.902886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.902898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.903199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.903211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.903446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.903459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.903636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.903648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.903979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.903991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.904166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.904178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 00:30:10.271 [2024-07-15 15:35:13.904361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.271 [2024-07-15 15:35:13.904374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.271 qpair failed and we were unable to recover it. 
00:30:10.275 [2024-07-15 15:35:13.952297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.952310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.952489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.952502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.952748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.952760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.952956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.952969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.953141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.953153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.953403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.953415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.953743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.953757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.953952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.953966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.954132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.954145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.954319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.954331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 
00:30:10.275 [2024-07-15 15:35:13.954578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.954590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.954826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.954842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.955042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.955054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.955239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.955251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.955366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.955378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.955625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.955637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.955941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.955954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.956232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.956244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.956426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.956438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.956685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.956697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 
00:30:10.275 [2024-07-15 15:35:13.956999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.957012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.957187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.957200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.957449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.957461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.957581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.957593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.957827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.957845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.958028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.958041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.958305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.958317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.958553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.958566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.958751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.958763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.959005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.959018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 
00:30:10.275 [2024-07-15 15:35:13.959203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.959215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.959402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.959414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.959660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.959673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.959929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.959942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.960175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.960187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.960488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.960501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.960737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.960750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.275 [2024-07-15 15:35:13.960937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.275 [2024-07-15 15:35:13.960950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.275 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.961256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.961269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.961456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.961468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 
00:30:10.276 [2024-07-15 15:35:13.961779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.961791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.961975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.961988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.962248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.962260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.962512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.962524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.962694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.962706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.962965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.962977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.963151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.963164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.963411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.963422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.963658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.963670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.963916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.963928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 
00:30:10.276 [2024-07-15 15:35:13.964173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.964187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.964490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.964502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.964824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.964841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.965025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.965037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.965291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.965303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.965492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.965504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.965767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.965779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.965972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.965985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.966235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.966247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.966437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.966450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 
00:30:10.276 [2024-07-15 15:35:13.966618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.966631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.966879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.966892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.967142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.967154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.967341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.967353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.967529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.967541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.967788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.967800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.967998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.968011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.968249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.968261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.968437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.968450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.968697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.968710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 
00:30:10.276 [2024-07-15 15:35:13.968977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.968990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.969236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.969248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.969417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.969430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.969678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.969690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.969935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.969947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.970151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.970163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.970430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.970442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.970621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.970634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.970813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.970826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.971079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.971091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 
00:30:10.276 [2024-07-15 15:35:13.971270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.971282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.971496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.971510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.971743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.971755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.971945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.971957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.972216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.972229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.972412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.972425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.972748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.972761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.973000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.973013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.973259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.973271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.973544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.973557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 
00:30:10.276 [2024-07-15 15:35:13.973763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.973775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.973921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.973933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.974221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.974236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.974565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.974578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.974906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.974919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.975175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.975188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.975511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.975523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.276 qpair failed and we were unable to recover it. 00:30:10.276 [2024-07-15 15:35:13.975761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.276 [2024-07-15 15:35:13.975773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.976017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.976030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.976264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.976276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 
00:30:10.277 [2024-07-15 15:35:13.976461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.976475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.976717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.976729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.977030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.977042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.977277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.977289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.977615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.977627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.977861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.977874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.978120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.978132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.978319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.978331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.978588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.978600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.978760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.978772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 
00:30:10.277 [2024-07-15 15:35:13.979099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.979111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.979432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.979444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.979678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.979690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.979927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.979939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.980122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.980134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.980433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.980445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.980626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.980638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.980872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.980885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.981155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.981166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.981345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.981357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 
00:30:10.277 [2024-07-15 15:35:13.981467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.981478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.981626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.981638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.981875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.981888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.982060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.982072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.982324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.982336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.982585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.982597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.982844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.982856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.983099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.983111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.983354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.983366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.983602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.983614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 
00:30:10.277 [2024-07-15 15:35:13.983772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.983783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.984018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.984030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.984226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.984240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.984442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.984454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.984691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.984702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.984804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.984816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.985060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.985072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.985322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.985334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.985568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.985580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 00:30:10.277 [2024-07-15 15:35:13.985820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.277 [2024-07-15 15:35:13.985836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.277 qpair failed and we were unable to recover it. 
00:30:10.277 [2024-07-15 15:35:13.986079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.277 [2024-07-15 15:35:13.986091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:10.277 qpair failed and we were unable to recover it.
00:30:10.277 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triple repeats for every reconnect attempt, timestamps advancing from 15:35:13.986 to 15:35:14.009, always against tqpair=0x7ff16c000b90, addr=10.0.0.2, port=4420 ...]
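errno = 111 here is ECONNREFUSED: each nvme_tcp_qpair_connect_sock attempt is refused because nothing is accepting TCP connections at 10.0.0.2:4420 at that instant, so the initiator-side qpair cannot be established and is reported as unrecoverable. As a point of reference only — a minimal standalone sketch, not SPDK code; the address and port are copied from the log — a bare POSIX connect() against a port with no listener fails in exactly this way:

/* Minimal illustration (not SPDK code): connecting to a TCP port with no
 * listener fails with ECONNREFUSED, which is what posix_sock_create is
 * reporting above. Address/port mirror the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the far side, Linux sets ECONNREFUSED,
         * whose value is 111 -- the same "errno = 111" as above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}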
00:30:10.279 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:10.279 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:30:10.279 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:10.279 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:10.279 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:10.279 [... connect() retries resume immediately: the errno = 111 / tqpair=0x7ff16c000b90 failure triple repeats from 15:35:14.011 to 15:35:14.013 ...]
00:30:10.280 [... the identical failure triple continues for every subsequent attempt, timestamps advancing from 15:35:14.013 to 15:35:14.035, still against tqpair=0x7ff16c000b90, addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:10.281 [2024-07-15 15:35:14.035782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.035794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.035965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.035977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.036142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.036153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.036334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.036346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.036583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.036595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.036874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.036886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.037058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.037070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.037249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.037261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.037509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.037521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.037733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.037745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 
00:30:10.281 [2024-07-15 15:35:14.037910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.037922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.038176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.038188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.038430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.038441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.038696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.038708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.038888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.038907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.039150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.039162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.039339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.039351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.039528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.039540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.039782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.039795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.040052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.040065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 
00:30:10.281 [2024-07-15 15:35:14.040311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.040323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.040504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.040516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.040692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.040705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.040953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.040968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.041142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.041157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.041352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.041365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.041550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.041562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.281 [2024-07-15 15:35:14.041726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.281 [2024-07-15 15:35:14.041738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.281 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.041941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.041953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.042139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.042151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-07-15 15:35:14.042318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.042329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.042502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.042515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.042691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.042703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.042874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.042887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.043051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.043063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.043234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.043247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.043409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.043421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.043603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.043615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.043801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.043812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.043981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.043993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-07-15 15:35:14.044176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.044189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.044351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.044363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.044535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.044547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.044779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.044791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.044958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.044971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.045135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.045146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.045314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.045326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.045509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.045522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.045697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.045709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.045889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.045901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-07-15 15:35:14.046096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.046107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.046282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.046294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.046622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.046634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.046825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.046842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.047080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.047092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.047266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.047278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.047371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.047383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.047557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.047569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.047736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.047748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.047924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.047938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-07-15 15:35:14.048111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.048123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.048355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.048368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.048620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.048632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.048805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.048817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.049056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.049070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.049258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.049272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.049450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.049463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.049708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.049720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.049889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.049902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.050141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.050154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-07-15 15:35:14.050336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.050347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.050521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.050533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.050773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.050785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.051024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.051037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.051271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.051283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.051523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.051536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.051707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.051719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.051892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.051904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.052097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.052109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.052359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.052372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 
00:30:10.282 [2024-07-15 15:35:14.052532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.052544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.052725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.052737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.052918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.052930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.053108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.053120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.053296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.053308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.053554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.053566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.053755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.053767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 [2024-07-15 15:35:14.053967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.282 [2024-07-15 15:35:14.053982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.282 qpair failed and we were unable to recover it. 00:30:10.282 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.282 [2024-07-15 15:35:14.054164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.054177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 
00:30:10.283 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:10.283 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:10.283 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the tqpair=0x7ff16c000b90 connect-failure triplet continues to interleave with these xtrace lines, repeating from 15:35:14.054 through 15:35:14.056 ...]
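[Illustrative sketch, not part of the captured log: in the SPDK test harness, rpc_cmd forwards the named RPC to the running target, typically via scripts/rpc.py over the target's RPC socket; the socket path below is an assumption, not taken from this log.]
  # Create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0,
  # against a locally running SPDK target (assumed default socket path):
  sudo ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  # Verify the bdev was registered:
  sudo ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs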
00:30:10.283 [2024-07-15 15:35:14.056317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.056329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.056511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.056523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.056694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.056706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.056941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.056953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.057117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.057129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.057371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.057383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.057621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.057633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.057823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.057839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.058017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.058029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.058264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.058276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 
00:30:10.283 [2024-07-15 15:35:14.058523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.058535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.058726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.058738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.059043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.059055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.059198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.059210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.059396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.059408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.059588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.059600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.059781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.059793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.059982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.059995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.060230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.060242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.060480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.060492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 
00:30:10.283 [2024-07-15 15:35:14.060659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.060671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.060843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.060856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.061031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.061043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.061283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.061294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.061530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.061543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.061791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.061803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.061992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.062005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.062286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.062298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.062527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.062540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 00:30:10.283 [2024-07-15 15:35:14.062707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.283 [2024-07-15 15:35:14.062720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.283 qpair failed and we were unable to recover it. 
00:30:10.283 [2024-07-15 15:35:14.062908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.283 [2024-07-15 15:35:14.062920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:10.283 qpair failed and we were unable to recover it.
[... 27 more identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.063094 through 15:35:14.069028, omitted; only the timestamps advance ...]
00:30:10.284 [2024-07-15 15:35:14.069249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.284 [2024-07-15 15:35:14.069288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff174000b90 with addr=10.0.0.2, port=4420
00:30:10.284 qpair failed and we were unable to recover it.
00:30:10.284 [2024-07-15 15:35:14.069521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.284 [2024-07-15 15:35:14.069559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19dd210 with addr=10.0.0.2, port=4420
00:30:10.284 qpair failed and we were unable to recover it.
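Triage note: errno = 111 in the records above is ECONNREFUSED on Linux, so each connect() to 10.0.0.2:4420 was actively refused; that is the behaviour nvmf_target_disconnect_tc2 provokes while the target side is down (the listener notice only appears further below). A minimal sketch for confirming the mapping on a test node, assuming the usual glibc/kernel-headers layout (the header path is an assumption, not taken from this log):

  # errno 111 -> ECONNREFUSED ("Connection refused") on Linux
  grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */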
00:30:10.284 [2024-07-15 15:35:14.069594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eb1f0 (9): Bad file descriptor
00:30:10.284 [2024-07-15 15:35:14.069849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.284 [2024-07-15 15:35:14.069885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff164000b90 with addr=10.0.0.2, port=4420
00:30:10.284 qpair failed and we were unable to recover it.
[... 18 further identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.070152 through 15:35:14.073726, omitted; only the timestamps advance ...]
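Triage note: two one-off items in the storm above are worth separating from the repeats. The nvme_tcp_qpair_process_completions flush failure carries errno 9, i.e. EBADF, matching the printed "Bad file descriptor" and consistent with the host flushing a qpair whose socket had already been torn down by the disconnect under test; and the records for tqpair=0x7ff174000b90, 0x19dd210 and 0x7ff164000b90 show the same refusal hitting qpair objects other than the dominant 0x7ff16c000b90.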
00:30:10.284 [2024-07-15 15:35:14.073974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.284 [2024-07-15 15:35:14.073987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:10.284 qpair failed and we were unable to recover it.
00:30:10.284 Malloc0
00:30:10.284 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:10.284 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:10.284 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:10.284 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 7 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.074829 through 15:35:14.076279, omitted ...]
[... 7 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.076468 through 15:35:14.077733, omitted ...]
00:30:10.284 [2024-07-15 15:35:14.077817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... 2 more identical connect()/qpair-failed records for the same tqpair, at 15:35:14.077855 and 15:35:14.078107, omitted ...]
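Triage note: the *** TCP Transport Init *** notice above is the target-side acknowledgement of the rpc_cmd nvmf_create_transport call traced earlier. For reference, the bring-up this test drives through rpc_cmd corresponds roughly to the following hand-run sequence with SPDK's scripts/rpc.py; a minimal sketch, assuming a running target app (e.g. nvmf_tgt) on the default RPC socket, where the bdev_malloc_create step with illustrative sizes stands in for however the Malloc0 bdev was actually created earlier in the run:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # assumed: 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The last call is what produces the *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** notice further down in this log.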
[... 30 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.078292 through 15:35:14.084519, omitted; only the timestamps advance ...]
[... 5 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.084713 through 15:35:14.085562, omitted ...]
00:30:10.285 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:10.285 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:10.285 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:10.285 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 3 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.086436 through 15:35:14.087122, omitted ...]
[... 30 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.087307 through 15:35:14.093532, omitted; only the timestamps advance ...]
[... 4 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.093873 through 15:35:14.094398, omitted ...]
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... 4 further identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.094646 through 15:35:14.095325, interleaved with the script trace above, omitted ...]
00:30:10.286 [2024-07-15 15:35:14.095511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.286 [2024-07-15 15:35:14.095524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420
00:30:10.286 qpair failed and we were unable to recover it.
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 8 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.095691 through 15:35:14.097143, omitted ...]
[... 20 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.097396 through 15:35:14.101740, omitted; only the timestamps advance ...]
[... 4 identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.101946 through 15:35:14.102443, omitted ...]
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... 4 further identical connect()/qpair-failed records for tqpair=0x7ff16c000b90, 15:35:14.102633 through 15:35:14.103362, interleaved with the script trace above, omitted ...]
00:30:10.286 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.286 [2024-07-15 15:35:14.103677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.103691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.103869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.103885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.104069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.104082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.104324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.104338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.104604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.104617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.104847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.104860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.105057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.105070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.105235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.105248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 00:30:10.286 [2024-07-15 15:35:14.105503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.286 [2024-07-15 15:35:14.105517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.286 qpair failed and we were unable to recover it. 
00:30:10.286 [2024-07-15 15:35:14.105751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.287 [2024-07-15 15:35:14.105764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.287 qpair failed and we were unable to recover it. 00:30:10.287 [2024-07-15 15:35:14.106097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.287 [2024-07-15 15:35:14.106101] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.287 [2024-07-15 15:35:14.106110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff16c000b90 with addr=10.0.0.2, port=4420 00:30:10.287 qpair failed and we were unable to recover it. 00:30:10.287 [2024-07-15 15:35:14.108377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.287 [2024-07-15 15:35:14.108475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.287 [2024-07-15 15:35:14.108495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.287 [2024-07-15 15:35:14.108506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.287 [2024-07-15 15:35:14.108515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.287 [2024-07-15 15:35:14.108537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.287 qpair failed and we were unable to recover it. 00:30:10.287 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.287 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:10.287 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.287 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:10.287 [2024-07-15 15:35:14.118338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.287 [2024-07-15 15:35:14.118527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.287 [2024-07-15 15:35:14.118548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.287 [2024-07-15 15:35:14.118558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.287 [2024-07-15 15:35:14.118568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.287 [2024-07-15 15:35:14.118588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.287 qpair failed and we were unable to recover it. 
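From this point the failure mode changes: the nvmf_tcp_listen NOTICE above shows the listener is back after the rpc_cmd nvmf_subsystem_add_listener call, so the TCP connect now succeeds, but the Fabrics CONNECT for the I/O qpair is rejected. The target logs "Unknown controller ID 0x1" and the initiator sees the CONNECT completion with sct 1, sc 130. Read against the NVMe-oF specification, status code type 1 is "command specific" and, for the CONNECT command, status code 0x82 (decimal 130) is "Connect Invalid Parameters", which is consistent with an I/O-qpair CONNECT naming a controller ID the target no longer tracks. A decoding sketch under that reading (the helper below is illustrative, not an SPDK API):

#include <stdint.h>
#include <stdio.h>

/* Illustrative decoder for the two status fields the log prints
 * ("sct 1, sc 130"); values per the NVMe-oF spec's CONNECT response. */
static const char *decode_connect_status(uint8_t sct, uint8_t sc)
{
    if (sct != 1) {                 /* 1 = Command Specific status type */
        return "not a command-specific status";
    }
    switch (sc) {
    case 0x80: return "CONNECT: Incompatible Format";
    case 0x81: return "CONNECT: Controller Busy";
    case 0x82: return "CONNECT: Invalid Parameters"; /* sc 130 in the log */
    case 0x83: return "CONNECT: Restart Discovery";
    case 0x84: return "CONNECT: Invalid Host";
    default:   return "unknown command-specific status";
    }
}

int main(void)
{
    printf("sct 1, sc 130 -> %s\n", decode_connect_status(1, 130));
    return 0;
}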
00:30:10.287 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.287 15:35:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3226061 00:30:10.287 [2024-07-15 15:35:14.128400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.287 [2024-07-15 15:35:14.128486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.287 [2024-07-15 15:35:14.128504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.287 [2024-07-15 15:35:14.128515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.287 [2024-07-15 15:35:14.128524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.287 [2024-07-15 15:35:14.128542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.287 qpair failed and we were unable to recover it. 00:30:10.287 [2024-07-15 15:35:14.138335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.287 [2024-07-15 15:35:14.138452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.287 [2024-07-15 15:35:14.138478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.287 [2024-07-15 15:35:14.138489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.287 [2024-07-15 15:35:14.138498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.287 [2024-07-15 15:35:14.138517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.287 qpair failed and we were unable to recover it. 00:30:10.287 [2024-07-15 15:35:14.148400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.287 [2024-07-15 15:35:14.148500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.287 [2024-07-15 15:35:14.148517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.287 [2024-07-15 15:35:14.148527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.287 [2024-07-15 15:35:14.148536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.287 [2024-07-15 15:35:14.148558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.287 qpair failed and we were unable to recover it. 
00:30:10.287 [2024-07-15 15:35:14.158401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.287 [2024-07-15 15:35:14.158485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.287 [2024-07-15 15:35:14.158505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.287 [2024-07-15 15:35:14.158515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.287 [2024-07-15 15:35:14.158525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.287 [2024-07-15 15:35:14.158545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.287 qpair failed and we were unable to recover it. 00:30:10.545 [2024-07-15 15:35:14.168423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.545 [2024-07-15 15:35:14.168510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.545 [2024-07-15 15:35:14.168528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.545 [2024-07-15 15:35:14.168538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.545 [2024-07-15 15:35:14.168546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.545 [2024-07-15 15:35:14.168566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.545 qpair failed and we were unable to recover it. 00:30:10.545 [2024-07-15 15:35:14.178402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.545 [2024-07-15 15:35:14.178492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.545 [2024-07-15 15:35:14.178510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.545 [2024-07-15 15:35:14.178520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.545 [2024-07-15 15:35:14.178529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.545 [2024-07-15 15:35:14.178549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.545 qpair failed and we were unable to recover it. 
00:30:10.545 [2024-07-15 15:35:14.188484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.545 [2024-07-15 15:35:14.188571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.545 [2024-07-15 15:35:14.188589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.545 [2024-07-15 15:35:14.188599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.545 [2024-07-15 15:35:14.188608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.545 [2024-07-15 15:35:14.188628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.545 qpair failed and we were unable to recover it. 00:30:10.545 [2024-07-15 15:35:14.198448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.545 [2024-07-15 15:35:14.198533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.545 [2024-07-15 15:35:14.198555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.545 [2024-07-15 15:35:14.198565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.545 [2024-07-15 15:35:14.198573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.545 [2024-07-15 15:35:14.198594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.545 qpair failed and we were unable to recover it. 00:30:10.545 [2024-07-15 15:35:14.208499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.545 [2024-07-15 15:35:14.208581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.545 [2024-07-15 15:35:14.208599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.545 [2024-07-15 15:35:14.208610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.545 [2024-07-15 15:35:14.208618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.545 [2024-07-15 15:35:14.208637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.545 qpair failed and we were unable to recover it. 
00:30:10.545 [2024-07-15 15:35:14.218498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.545 [2024-07-15 15:35:14.218584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.545 [2024-07-15 15:35:14.218604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.545 [2024-07-15 15:35:14.218614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.545 [2024-07-15 15:35:14.218624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.545 [2024-07-15 15:35:14.218644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.545 qpair failed and we were unable to recover it. 00:30:10.545 [2024-07-15 15:35:14.228600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.228778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.228797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.228807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.228817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.228840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.238571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.238655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.238674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.238684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.238696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.238716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 
00:30:10.546 [2024-07-15 15:35:14.248645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.248728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.248747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.248757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.248766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.248787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.258634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.258719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.258737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.258747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.258755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.258774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.268745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.268857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.268874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.268884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.268892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.268911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 
00:30:10.546 [2024-07-15 15:35:14.278736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.278828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.278850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.278860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.278869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.278887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.288745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.288851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.288869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.288878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.288887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.288906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.298754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.298855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.298873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.298883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.298892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.298912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 
00:30:10.546 [2024-07-15 15:35:14.308808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.308909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.308927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.308936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.308945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.308964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.318999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.319095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.319113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.319122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.319131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.319150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.328958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.329040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.329058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.329073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.329082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.329101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 
00:30:10.546 [2024-07-15 15:35:14.338931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.339015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.339033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.339042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.339051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.339070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.348964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.349046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.349065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.349075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.349084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.349102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 00:30:10.546 [2024-07-15 15:35:14.358900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.358985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.359003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.359014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.359024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.546 [2024-07-15 15:35:14.359044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.546 qpair failed and we were unable to recover it. 
00:30:10.546 [2024-07-15 15:35:14.369044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.546 [2024-07-15 15:35:14.369143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.546 [2024-07-15 15:35:14.369162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.546 [2024-07-15 15:35:14.369173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.546 [2024-07-15 15:35:14.369184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.369204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 00:30:10.547 [2024-07-15 15:35:14.379003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.379091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.379109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.379118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.379128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.379147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 00:30:10.547 [2024-07-15 15:35:14.389037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.389120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.389138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.389147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.389156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.389174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 
00:30:10.547 [2024-07-15 15:35:14.399081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.399162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.399180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.399190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.399199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.399217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 00:30:10.547 [2024-07-15 15:35:14.409110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.409197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.409215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.409225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.409233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.409252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 00:30:10.547 [2024-07-15 15:35:14.419096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.419183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.419200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.419213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.419222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.419240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 
00:30:10.547 [2024-07-15 15:35:14.429144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.429230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.429248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.429257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.429266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.429285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 00:30:10.547 [2024-07-15 15:35:14.439173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.439255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.439273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.439283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.439291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.439309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 00:30:10.547 [2024-07-15 15:35:14.449199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.547 [2024-07-15 15:35:14.449282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.547 [2024-07-15 15:35:14.449299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.547 [2024-07-15 15:35:14.449309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.547 [2024-07-15 15:35:14.449317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.547 [2024-07-15 15:35:14.449336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.547 qpair failed and we were unable to recover it. 
00:30:10.805 [2024-07-15 15:35:14.459208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.805 [2024-07-15 15:35:14.459310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.805 [2024-07-15 15:35:14.459328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.805 [2024-07-15 15:35:14.459337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.805 [2024-07-15 15:35:14.459346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.805 [2024-07-15 15:35:14.459365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-07-15 15:35:14.469303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.805 [2024-07-15 15:35:14.469416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.805 [2024-07-15 15:35:14.469433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.805 [2024-07-15 15:35:14.469443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.805 [2024-07-15 15:35:14.469452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.805 [2024-07-15 15:35:14.469471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.805 qpair failed and we were unable to recover it. 00:30:10.805 [2024-07-15 15:35:14.479281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.805 [2024-07-15 15:35:14.479361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.805 [2024-07-15 15:35:14.479379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.805 [2024-07-15 15:35:14.479390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.805 [2024-07-15 15:35:14.479399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.805 [2024-07-15 15:35:14.479417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.805 qpair failed and we were unable to recover it. 
00:30:10.805 [2024-07-15 15:35:14.489324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.805 [2024-07-15 15:35:14.489402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.805 [2024-07-15 15:35:14.489420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.805 [2024-07-15 15:35:14.489430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.805 [2024-07-15 15:35:14.489438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.805 [2024-07-15 15:35:14.489456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-07-15 15:35:14.499317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.499419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.499436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.499446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.499455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.499473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-07-15 15:35:14.509364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.509453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.509474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.509484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.509492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.509511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 
00:30:10.806 [2024-07-15 15:35:14.519361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.519445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.519464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.519473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.519482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.519501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-07-15 15:35:14.529423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.529708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.529727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.529737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.529746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.529765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-07-15 15:35:14.539463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.539546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.539563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.539573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.539582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.539601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 
00:30:10.806 [2024-07-15 15:35:14.549460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.549558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.549576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.549585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.549594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.549617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-07-15 15:35:14.559506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.559589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.559607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.559617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.559625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.559643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 00:30:10.806 [2024-07-15 15:35:14.569633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:10.806 [2024-07-15 15:35:14.569804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:10.806 [2024-07-15 15:35:14.569823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:10.806 [2024-07-15 15:35:14.569836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.806 [2024-07-15 15:35:14.569845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:10.806 [2024-07-15 15:35:14.569865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:10.806 qpair failed and we were unable to recover it. 
00:30:10.806 [2024-07-15 15:35:14.579541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.806 [2024-07-15 15:35:14.579637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.806 [2024-07-15 15:35:14.579654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.806 [2024-07-15 15:35:14.579663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.806 [2024-07-15 15:35:14.579672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.806 [2024-07-15 15:35:14.579690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.806 qpair failed and we were unable to recover it.
00:30:10.806 [2024-07-15 15:35:14.589592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.806 [2024-07-15 15:35:14.589671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.806 [2024-07-15 15:35:14.589688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.806 [2024-07-15 15:35:14.589698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.806 [2024-07-15 15:35:14.589707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.806 [2024-07-15 15:35:14.589725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.806 qpair failed and we were unable to recover it.
00:30:10.806 [2024-07-15 15:35:14.599641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.806 [2024-07-15 15:35:14.599748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.806 [2024-07-15 15:35:14.599769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.806 [2024-07-15 15:35:14.599779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.806 [2024-07-15 15:35:14.599788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.806 [2024-07-15 15:35:14.599808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.806 qpair failed and we were unable to recover it.
00:30:10.806 [2024-07-15 15:35:14.609651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.806 [2024-07-15 15:35:14.609736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.806 [2024-07-15 15:35:14.609753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.806 [2024-07-15 15:35:14.609763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.806 [2024-07-15 15:35:14.609771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.806 [2024-07-15 15:35:14.609790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.806 qpair failed and we were unable to recover it.
00:30:10.806 [2024-07-15 15:35:14.619616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.806 [2024-07-15 15:35:14.619788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.806 [2024-07-15 15:35:14.619808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.806 [2024-07-15 15:35:14.619818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.806 [2024-07-15 15:35:14.619827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.619851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.629720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.629809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.629827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.629841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.629849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.629868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.639734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.639816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.639838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.639848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.639861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.639880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.649769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.649858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.649875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.649885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.649894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.649912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.659771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.659908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.659927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.659937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.659946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.659964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.669822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.669912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.669930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.669939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.669948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.669966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.679817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.679914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.679932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.679942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.679951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.679970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.689886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.689969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.689987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.689996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.690005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.690024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.699883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.699980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.699997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.700007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.700016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.700035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:10.807 [2024-07-15 15:35:14.709942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:10.807 [2024-07-15 15:35:14.710029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:10.807 [2024-07-15 15:35:14.710047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:10.807 [2024-07-15 15:35:14.710057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:10.807 [2024-07-15 15:35:14.710065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:10.807 [2024-07-15 15:35:14.710084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:10.807 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.719977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.066 [2024-07-15 15:35:14.720065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.066 [2024-07-15 15:35:14.720083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.066 [2024-07-15 15:35:14.720093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.066 [2024-07-15 15:35:14.720101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.066 [2024-07-15 15:35:14.720120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.066 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.729991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.066 [2024-07-15 15:35:14.730070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.066 [2024-07-15 15:35:14.730089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.066 [2024-07-15 15:35:14.730098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.066 [2024-07-15 15:35:14.730111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.066 [2024-07-15 15:35:14.730130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.066 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.740006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.066 [2024-07-15 15:35:14.740098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.066 [2024-07-15 15:35:14.740115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.066 [2024-07-15 15:35:14.740125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.066 [2024-07-15 15:35:14.740133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.066 [2024-07-15 15:35:14.740152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.066 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.750065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.066 [2024-07-15 15:35:14.750151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.066 [2024-07-15 15:35:14.750168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.066 [2024-07-15 15:35:14.750178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.066 [2024-07-15 15:35:14.750186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.066 [2024-07-15 15:35:14.750205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.066 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.760070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.066 [2024-07-15 15:35:14.760152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.066 [2024-07-15 15:35:14.760170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.066 [2024-07-15 15:35:14.760180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.066 [2024-07-15 15:35:14.760188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.066 [2024-07-15 15:35:14.760206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.066 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.770090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.066 [2024-07-15 15:35:14.770174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.066 [2024-07-15 15:35:14.770192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.066 [2024-07-15 15:35:14.770201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.066 [2024-07-15 15:35:14.770210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.066 [2024-07-15 15:35:14.770228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.066 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.780125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.066 [2024-07-15 15:35:14.780224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.066 [2024-07-15 15:35:14.780241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.066 [2024-07-15 15:35:14.780251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.066 [2024-07-15 15:35:14.780260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.066 [2024-07-15 15:35:14.780279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.066 qpair failed and we were unable to recover it.
00:30:11.066 [2024-07-15 15:35:14.790167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.790254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.790271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.790281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.790290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.790308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.800163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.800255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.800272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.800282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.800290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.800309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.810308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.810393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.810410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.810420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.810428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.810447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.820234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.820323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.820340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.820353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.820362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.820381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.830258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.830341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.830360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.830370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.830379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.830398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.840309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.840392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.840409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.840418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.840427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.840445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.850334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.850417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.850434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.850444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.850452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.850471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.860353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.860443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.860463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.860473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.860482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.860500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.870397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.870485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.870502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.870511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.870520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.870539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.880429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.880511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.880528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.880538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.880547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.880565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.890481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.890563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.890580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.890590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.890599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.890617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.900449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.900532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.900549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.900559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.900567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.900586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.910502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.910593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.910615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.910625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.910633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.910652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.920444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.920537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.920555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.067 [2024-07-15 15:35:14.920564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.067 [2024-07-15 15:35:14.920573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.067 [2024-07-15 15:35:14.920591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.067 qpair failed and we were unable to recover it.
00:30:11.067 [2024-07-15 15:35:14.930552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.067 [2024-07-15 15:35:14.930635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.067 [2024-07-15 15:35:14.930652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.068 [2024-07-15 15:35:14.930662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.068 [2024-07-15 15:35:14.930671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.068 [2024-07-15 15:35:14.930689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.068 qpair failed and we were unable to recover it.
00:30:11.068 [2024-07-15 15:35:14.940506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.068 [2024-07-15 15:35:14.940588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.068 [2024-07-15 15:35:14.940605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.068 [2024-07-15 15:35:14.940615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.068 [2024-07-15 15:35:14.940623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.068 [2024-07-15 15:35:14.940642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.068 qpair failed and we were unable to recover it.
00:30:11.068 [2024-07-15 15:35:14.950607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.068 [2024-07-15 15:35:14.950689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.068 [2024-07-15 15:35:14.950705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.068 [2024-07-15 15:35:14.950715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.068 [2024-07-15 15:35:14.950724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.068 [2024-07-15 15:35:14.950745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.068 qpair failed and we were unable to recover it.
00:30:11.068 [2024-07-15 15:35:14.960644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.068 [2024-07-15 15:35:14.960736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.068 [2024-07-15 15:35:14.960753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.068 [2024-07-15 15:35:14.960763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.068 [2024-07-15 15:35:14.960771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.068 [2024-07-15 15:35:14.960790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.068 qpair failed and we were unable to recover it.
00:30:11.068 [2024-07-15 15:35:14.970713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.068 [2024-07-15 15:35:14.970818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.068 [2024-07-15 15:35:14.970839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.068 [2024-07-15 15:35:14.970849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.068 [2024-07-15 15:35:14.970858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.068 [2024-07-15 15:35:14.970876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.068 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:14.980744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:14.980847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:14.980865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:14.980874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:14.980883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:14.980902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:14.990749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:14.990861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:14.990879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:14.990889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:14.990898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:14.990917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.000763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.000851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.000872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.000882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.000891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.000909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.010799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.010886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.010904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.010914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.010922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.010941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.020807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.020901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.020918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.020928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.020936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.020954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.030836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.030925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.030943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.030953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.030961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.030980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.040828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.040920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.040937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.040947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.040959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.040978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.050896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.050990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.051007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.051016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.051025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.051043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.060936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.061034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.061051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.061060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.061069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.061088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.071012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.071094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.071112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.071122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.071130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.071148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.081009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.081103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.081120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.081130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.081139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.081157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.091042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.091128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.329 [2024-07-15 15:35:15.091145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.329 [2024-07-15 15:35:15.091155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.329 [2024-07-15 15:35:15.091164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.329 [2024-07-15 15:35:15.091182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.329 qpair failed and we were unable to recover it.
00:30:11.329 [2024-07-15 15:35:15.101054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.329 [2024-07-15 15:35:15.101137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.101155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.101165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.101174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.101192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.111064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.111145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.111163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.111174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.111182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.111201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.121070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.121165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.121182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.121192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.121200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.121219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.131171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.131264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.131282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.131291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.131304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.131322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.141162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.141249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.141266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.141276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.141284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.141303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.151177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.151263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.151280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.151290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.151298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.151317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.161167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.161289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.161308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.161318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.161327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.161345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.171231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.171318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.171335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.171345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.171353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.171372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.181271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.181351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.181369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.181379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.181388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.181405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.191435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.191600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.191619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.191628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.191637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.191656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.201352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.201434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.201452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.201461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.201470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.201488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.211301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.211385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.211402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.211412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.211421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.211440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.221391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.221476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.221493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.221507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.221515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.221534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.330 [2024-07-15 15:35:15.231447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.330 [2024-07-15 15:35:15.231529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.330 [2024-07-15 15:35:15.231547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.330 [2024-07-15 15:35:15.231559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.330 [2024-07-15 15:35:15.231568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.330 [2024-07-15 15:35:15.231586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.330 qpair failed and we were unable to recover it.
00:30:11.590 [2024-07-15 15:35:15.241458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.590 [2024-07-15 15:35:15.241545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.590 [2024-07-15 15:35:15.241563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.590 [2024-07-15 15:35:15.241572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.590 [2024-07-15 15:35:15.241581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.590 [2024-07-15 15:35:15.241599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.590 qpair failed and we were unable to recover it.
00:30:11.590 [2024-07-15 15:35:15.251498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.590 [2024-07-15 15:35:15.251579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.590 [2024-07-15 15:35:15.251597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.590 [2024-07-15 15:35:15.251607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.590 [2024-07-15 15:35:15.251616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.590 [2024-07-15 15:35:15.251634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.590 qpair failed and we were unable to recover it.
00:30:11.590 [2024-07-15 15:35:15.261504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.590 [2024-07-15 15:35:15.261681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.590 [2024-07-15 15:35:15.261701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.590 [2024-07-15 15:35:15.261712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.590 [2024-07-15 15:35:15.261721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:11.590 [2024-07-15 15:35:15.261740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.590 qpair failed and we were unable to recover it.
00:30:11.590 [2024-07-15 15:35:15.271549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.271648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.271666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.590 [2024-07-15 15:35:15.271675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.590 [2024-07-15 15:35:15.271684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.590 [2024-07-15 15:35:15.271703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.590 qpair failed and we were unable to recover it. 00:30:11.590 [2024-07-15 15:35:15.281581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.281665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.281684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.590 [2024-07-15 15:35:15.281694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.590 [2024-07-15 15:35:15.281703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.590 [2024-07-15 15:35:15.281721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.590 qpair failed and we were unable to recover it. 00:30:11.590 [2024-07-15 15:35:15.291694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.291777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.291795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.590 [2024-07-15 15:35:15.291804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.590 [2024-07-15 15:35:15.291814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.590 [2024-07-15 15:35:15.291839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.590 qpair failed and we were unable to recover it. 
00:30:11.590 [2024-07-15 15:35:15.301617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.301787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.301806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.590 [2024-07-15 15:35:15.301818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.590 [2024-07-15 15:35:15.301827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.590 [2024-07-15 15:35:15.301851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.590 qpair failed and we were unable to recover it. 00:30:11.590 [2024-07-15 15:35:15.311637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.311716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.311786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.590 [2024-07-15 15:35:15.311796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.590 [2024-07-15 15:35:15.311805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.590 [2024-07-15 15:35:15.311824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.590 qpair failed and we were unable to recover it. 00:30:11.590 [2024-07-15 15:35:15.321632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.321708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.321726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.590 [2024-07-15 15:35:15.321736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.590 [2024-07-15 15:35:15.321744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.590 [2024-07-15 15:35:15.321763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.590 qpair failed and we were unable to recover it. 
00:30:11.590 [2024-07-15 15:35:15.331755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.331878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.331898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.590 [2024-07-15 15:35:15.331908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.590 [2024-07-15 15:35:15.331917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.590 [2024-07-15 15:35:15.331936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.590 qpair failed and we were unable to recover it. 00:30:11.590 [2024-07-15 15:35:15.341726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.590 [2024-07-15 15:35:15.341810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.590 [2024-07-15 15:35:15.341829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.341843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.341852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.341872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.351819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.351940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.351958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.351967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.351976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.351997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 
00:30:11.591 [2024-07-15 15:35:15.361805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.361917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.361936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.361946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.361955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.361975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.371803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.371892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.371910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.371920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.371929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.371948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.381850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.381979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.381997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.382007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.382016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.382035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 
00:30:11.591 [2024-07-15 15:35:15.391892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.391977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.391995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.392006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.392014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.392033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.401927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.402106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.402128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.402137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.402146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.402165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.411958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.412072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.412090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.412100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.412109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.412128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 
00:30:11.591 [2024-07-15 15:35:15.421990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.422118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.422137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.422147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.422156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.422175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.431994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.432079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.432097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.432107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.432115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.432134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.441967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.442051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.442069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.442078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.442087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.442109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 
00:30:11.591 [2024-07-15 15:35:15.452077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.452202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.452222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.452233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.452242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.452262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.462030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.462111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.462129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.462139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.462147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.462166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.591 [2024-07-15 15:35:15.472097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.472203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.472221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.472231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.472240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.472259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 
00:30:11.591 [2024-07-15 15:35:15.482125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.591 [2024-07-15 15:35:15.482209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.591 [2024-07-15 15:35:15.482226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.591 [2024-07-15 15:35:15.482236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.591 [2024-07-15 15:35:15.482244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.591 [2024-07-15 15:35:15.482262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.591 qpair failed and we were unable to recover it. 00:30:11.592 [2024-07-15 15:35:15.492096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.592 [2024-07-15 15:35:15.492193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.592 [2024-07-15 15:35:15.492210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.592 [2024-07-15 15:35:15.492220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.592 [2024-07-15 15:35:15.492229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.592 [2024-07-15 15:35:15.492248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.592 qpair failed and we were unable to recover it. 00:30:11.851 [2024-07-15 15:35:15.502142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.502229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.502245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.502255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.502264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.502283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 
00:30:11.851 [2024-07-15 15:35:15.512171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.512267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.512285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.512294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.512303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.512322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 00:30:11.851 [2024-07-15 15:35:15.522258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.522341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.522358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.522368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.522377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.522395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 00:30:11.851 [2024-07-15 15:35:15.532283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.532366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.532385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.532395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.532408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.532427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 
00:30:11.851 [2024-07-15 15:35:15.542290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.542376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.542393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.542403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.542411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.542430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 00:30:11.851 [2024-07-15 15:35:15.552349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.552435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.552453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.552463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.552471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.552490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 00:30:11.851 [2024-07-15 15:35:15.562307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.562395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.562412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.562422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.562431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.562450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 
00:30:11.851 [2024-07-15 15:35:15.572340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.572470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.572491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.572503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.572513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.572534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.851 qpair failed and we were unable to recover it. 00:30:11.851 [2024-07-15 15:35:15.582413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.851 [2024-07-15 15:35:15.582497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.851 [2024-07-15 15:35:15.582515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.851 [2024-07-15 15:35:15.582525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.851 [2024-07-15 15:35:15.582533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.851 [2024-07-15 15:35:15.582551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.592450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.592563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.592580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.592590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.592599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.592618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 
00:30:11.852 [2024-07-15 15:35:15.602475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.602558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.602576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.602585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.602594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.602613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.612543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.612625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.612643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.612653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.612661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.612680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.622566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.622688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.622707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.622720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.622730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.622748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 
00:30:11.852 [2024-07-15 15:35:15.632562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.632640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.632659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.632668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.632677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.632696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.642593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.642672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.642689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.642699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.642708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.642727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.652604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.652687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.652705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.652714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.652723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.652741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 
00:30:11.852 [2024-07-15 15:35:15.662634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.662724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.662741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.662751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.662759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.662778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.672701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.672783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.672801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.672810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.672819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.672841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.682771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.682952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.682971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.682981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.682990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.683009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 
00:30:11.852 [2024-07-15 15:35:15.692785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.692874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.692892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.692902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.692910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.692930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.702803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.702942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.702968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.702979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.702988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.703006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.712795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.712883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.712901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.712916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.712924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.712944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 
00:30:11.852 [2024-07-15 15:35:15.722823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.852 [2024-07-15 15:35:15.722915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.852 [2024-07-15 15:35:15.722933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.852 [2024-07-15 15:35:15.722943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.852 [2024-07-15 15:35:15.722951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.852 [2024-07-15 15:35:15.722970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.852 qpair failed and we were unable to recover it. 00:30:11.852 [2024-07-15 15:35:15.732859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.853 [2024-07-15 15:35:15.732942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.853 [2024-07-15 15:35:15.732960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.853 [2024-07-15 15:35:15.732970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.853 [2024-07-15 15:35:15.732979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.853 [2024-07-15 15:35:15.732998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.853 qpair failed and we were unable to recover it. 00:30:11.853 [2024-07-15 15:35:15.742871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.853 [2024-07-15 15:35:15.742956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.853 [2024-07-15 15:35:15.742973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.853 [2024-07-15 15:35:15.742983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.853 [2024-07-15 15:35:15.742992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.853 [2024-07-15 15:35:15.743010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.853 qpair failed and we were unable to recover it. 
00:30:11.853 [2024-07-15 15:35:15.752935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.853 [2024-07-15 15:35:15.753111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.853 [2024-07-15 15:35:15.753129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.853 [2024-07-15 15:35:15.753139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.853 [2024-07-15 15:35:15.753147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:11.853 [2024-07-15 15:35:15.753166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.853 qpair failed and we were unable to recover it. 00:30:12.112 [2024-07-15 15:35:15.762939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.112 [2024-07-15 15:35:15.763023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.112 [2024-07-15 15:35:15.763040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.112 [2024-07-15 15:35:15.763050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.112 [2024-07-15 15:35:15.763059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.112 [2024-07-15 15:35:15.763078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.112 qpair failed and we were unable to recover it. 00:30:12.112 [2024-07-15 15:35:15.772988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.112 [2024-07-15 15:35:15.773072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.112 [2024-07-15 15:35:15.773090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.112 [2024-07-15 15:35:15.773100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.112 [2024-07-15 15:35:15.773108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.112 [2024-07-15 15:35:15.773128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.112 qpair failed and we were unable to recover it. 
00:30:12.113 [2024-07-15 15:35:15.782982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.783071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.783088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.783098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.783106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.783125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.793006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.793098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.793115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.793125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.793134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.793152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.802987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.803064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.803084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.803095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.803103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.803121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 
00:30:12.113 [2024-07-15 15:35:15.813066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.813150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.813168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.813178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.813186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.813205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.823081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.823167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.823184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.823194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.823203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.823221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.833056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.833142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.833160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.833170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.833179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.833197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 
00:30:12.113 [2024-07-15 15:35:15.843138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.843231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.843250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.843259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.843268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.843290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.853203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.853285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.853304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.853314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.853324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.853342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.863202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.863313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.863331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.863343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.863352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.863370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 
00:30:12.113 [2024-07-15 15:35:15.873164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.873244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.873262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.873272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.873280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.873299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.883269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.113 [2024-07-15 15:35:15.883348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.113 [2024-07-15 15:35:15.883365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.113 [2024-07-15 15:35:15.883375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.113 [2024-07-15 15:35:15.883384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.113 [2024-07-15 15:35:15.883402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.113 qpair failed and we were unable to recover it. 00:30:12.113 [2024-07-15 15:35:15.893330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.893533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.893554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.893565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.893574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.893592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 
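On the target side, "Unknown controller ID 0x1" means the CNTLID carried in the fabrics CONNECT data matched no live controller on the subsystem, which is what you would expect if the controller behind cntlid 1 had already been destroyed while the host kept retrying. SPDK's actual lookup lives in ctrlr.c; the sketch below is only the shape of that check, with hypothetical types:

    /* Illustrative shape of the target-side check behind
     * "Unknown controller ID 0x1"; SPDK's real code in ctrlr.c differs,
     * and these types are hypothetical. */
    #include <stddef.h>
    #include <stdint.h>

    struct ctrlr {                 /* hypothetical stand-in */
        uint16_t      cntlid;
        struct ctrlr *next;
    };

    /* Walk the subsystem's controller list for the CNTLID sent in the
     * fabrics CONNECT data. */
    static struct ctrlr *find_ctrlr(struct ctrlr *head, uint16_t cntlid)
    {
        for (struct ctrlr *c = head; c != NULL; c = c->next) {
            if (c->cntlid == cntlid) {
                return c;          /* attach the new I/O qpair here */
            }
        }
        return NULL;               /* -> "Unknown controller ID", CONNECT
                                    *    completes with invalid parameters */
    }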
00:30:12.114 [2024-07-15 15:35:15.903314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.903400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.903417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.903427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.903435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.903453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 00:30:12.114 [2024-07-15 15:35:15.913374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.913458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.913475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.913485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.913493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.913511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 00:30:12.114 [2024-07-15 15:35:15.923427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.923512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.923530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.923540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.923549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.923568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 
00:30:12.114 [2024-07-15 15:35:15.933357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.933441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.933459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.933468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.933480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.933499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 00:30:12.114 [2024-07-15 15:35:15.943433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.943526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.943544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.943553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.943562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.943581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 00:30:12.114 [2024-07-15 15:35:15.953466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.953641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.953660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.953670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.953679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.953698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 
00:30:12.114 [2024-07-15 15:35:15.963475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.963561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.963580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.963590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.963599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.963618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 00:30:12.114 [2024-07-15 15:35:15.973524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.973648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.973667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.973677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.973686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.973706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 00:30:12.114 [2024-07-15 15:35:15.983543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.983659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.983678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.983688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.983697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.114 [2024-07-15 15:35:15.983716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.114 qpair failed and we were unable to recover it. 
00:30:12.114 [2024-07-15 15:35:15.993589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.114 [2024-07-15 15:35:15.993672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.114 [2024-07-15 15:35:15.993689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.114 [2024-07-15 15:35:15.993700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.114 [2024-07-15 15:35:15.993709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.115 [2024-07-15 15:35:15.993727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.115 qpair failed and we were unable to recover it. 00:30:12.115 [2024-07-15 15:35:16.003591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.115 [2024-07-15 15:35:16.003677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.115 [2024-07-15 15:35:16.003695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.115 [2024-07-15 15:35:16.003705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.115 [2024-07-15 15:35:16.003714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.115 [2024-07-15 15:35:16.003733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.115 qpair failed and we were unable to recover it. 00:30:12.115 [2024-07-15 15:35:16.013650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.115 [2024-07-15 15:35:16.013736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.115 [2024-07-15 15:35:16.013753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.115 [2024-07-15 15:35:16.013763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.115 [2024-07-15 15:35:16.013771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.115 [2024-07-15 15:35:16.013790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.115 qpair failed and we were unable to recover it. 
00:30:12.374 [2024-07-15 15:35:16.023686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.023819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.023842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.023856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.023866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.023885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.374 [2024-07-15 15:35:16.033714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.033802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.033820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.033829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.033842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.033861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.374 [2024-07-15 15:35:16.043715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.043798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.043816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.043826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.043837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.043856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 
00:30:12.374 [2024-07-15 15:35:16.053682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.053766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.053783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.053793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.053802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.053820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.374 [2024-07-15 15:35:16.063772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.063860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.063877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.063887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.063895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.063914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.374 [2024-07-15 15:35:16.073830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.073949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.073967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.073977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.073985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.074004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 
00:30:12.374 [2024-07-15 15:35:16.083855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.083951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.083968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.083978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.083987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.084005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.374 [2024-07-15 15:35:16.093910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.094084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.094103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.094112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.094121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.094140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.374 [2024-07-15 15:35:16.103939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.104070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.104088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.104098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.104107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.104126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 
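On the host side, nvme_tcp_ctrlr_connect_qpair_poll and the nvme_qpair.c completion path named in these records are SPDK internals, but the -6 they report is plain ENXIO ("No such device or address") surfacing through the poll. A minimal application-level sketch of how that error reaches a caller, using the public spdk_nvme_qpair_process_completions() and spdk_nvme_ctrlr_free_io_qpair() calls; the loop itself is illustrative and assumes SPDK development headers:

    /* Application-level sketch, assuming SPDK development headers; the
     * internal connect-poll path named in the log is not reproduced here.
     * spdk_nvme_qpair_process_completions() and
     * spdk_nvme_ctrlr_free_io_qpair() are real public SPDK calls. */
    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static int poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* 0 = no completion limit; a negative return means the qpair is
         * dead at the transport level, e.g. the log's "CQ transport
         * error -6", which is -ENXIO. */
        int rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            fprintf(stderr, "qpair failed (rc=%d%s), freeing it\n",
                    rc, rc == -ENXIO ? " / ENXIO" : "");
            spdk_nvme_ctrlr_free_io_qpair(qpair);
        }
        return rc;
    }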
00:30:12.374 [2024-07-15 15:35:16.113932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.114019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.114036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.114049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.114058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.114076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.374 [2024-07-15 15:35:16.123992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.374 [2024-07-15 15:35:16.124097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.374 [2024-07-15 15:35:16.124115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.374 [2024-07-15 15:35:16.124125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.374 [2024-07-15 15:35:16.124134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.374 [2024-07-15 15:35:16.124154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.374 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.133988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.134072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.134089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.134099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.134107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.134126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 
00:30:12.375 [2024-07-15 15:35:16.144000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.144084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.144101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.144111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.144120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.144138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.154084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.154198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.154216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.154226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.154234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.154252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.164076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.164160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.164178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.164187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.164196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.164215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 
00:30:12.375 [2024-07-15 15:35:16.174106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.174221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.174239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.174249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.174258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.174276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.184164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.184257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.184274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.184284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.184292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.184311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.194135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.194244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.194261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.194271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.194281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.194300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 
00:30:12.375 [2024-07-15 15:35:16.204178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.204268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.204289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.204298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.204307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.204325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.214140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.214253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.214272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.214281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.214290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.214309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.224176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.224259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.224276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.224286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.224295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.224312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 
00:30:12.375 [2024-07-15 15:35:16.234262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.234348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.234365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.234374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.234383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.234400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.244241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.244325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.244342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.244352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.244360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.244382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.254320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.254403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.254421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.254430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.254439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.254457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 
00:30:12.375 [2024-07-15 15:35:16.264360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.264442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.375 [2024-07-15 15:35:16.264459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.375 [2024-07-15 15:35:16.264469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.375 [2024-07-15 15:35:16.264478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.375 [2024-07-15 15:35:16.264496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.375 qpair failed and we were unable to recover it. 00:30:12.375 [2024-07-15 15:35:16.274372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.375 [2024-07-15 15:35:16.274456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.376 [2024-07-15 15:35:16.274473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.376 [2024-07-15 15:35:16.274483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.376 [2024-07-15 15:35:16.274491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.376 [2024-07-15 15:35:16.274511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.376 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-15 15:35:16.284414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.284497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.284515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.284525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.284534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.284552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-15 15:35:16.294435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.294517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.294537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.294547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.294555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.294577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-15 15:35:16.304462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.304558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.304575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.304585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.304594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.304613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-15 15:35:16.314506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.314589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.314606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.314616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.314625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.314643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-15 15:35:16.324677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.324771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.324788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.324798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.324806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.324826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-15 15:35:16.334639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.334732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.334750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.334759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.334771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.334790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-15 15:35:16.344630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.344725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.344742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.344752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.344761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.344779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-15 15:35:16.354664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.354746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.354764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.354773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.354782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.354800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-15 15:35:16.364673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.364758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.364776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.364785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.364794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.364812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-15 15:35:16.374649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.374827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.374850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.374862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.374871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.635 [2024-07-15 15:35:16.374890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-15 15:35:16.384687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.635 [2024-07-15 15:35:16.384775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.635 [2024-07-15 15:35:16.384793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.635 [2024-07-15 15:35:16.384804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.635 [2024-07-15 15:35:16.384812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.384835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.394717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.394846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.394866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.394875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.394884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.394903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.404679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.404783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.404801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.404810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.404819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.404842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 
00:30:12.636 [2024-07-15 15:35:16.414716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.414809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.414827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.414842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.414851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.414870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.424802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.424889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.424906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.424916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.424928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.424947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.434848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.434936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.434955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.434965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.434973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.434993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 
00:30:12.636 [2024-07-15 15:35:16.444844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.445012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.445030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.445040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.445049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.445068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.454947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.455051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.455068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.455078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.455088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.455106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.464904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.464986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.465003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.465013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.465022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.465040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 
00:30:12.636 [2024-07-15 15:35:16.474989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.475087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.475105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.475114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.475123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.475141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.484943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.485022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.485041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.485052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.485060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.485078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.495006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.495094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.495112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.495121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.495130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.495149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 
00:30:12.636 [2024-07-15 15:35:16.505020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.505106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.505124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.505133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.505142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.505161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.515040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.515123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.515140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.636 [2024-07-15 15:35:16.515154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.636 [2024-07-15 15:35:16.515164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.636 [2024-07-15 15:35:16.515183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.636 qpair failed and we were unable to recover it. 00:30:12.636 [2024-07-15 15:35:16.525098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.636 [2024-07-15 15:35:16.525205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.636 [2024-07-15 15:35:16.525223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.637 [2024-07-15 15:35:16.525233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.637 [2024-07-15 15:35:16.525242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.637 [2024-07-15 15:35:16.525261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.637 qpair failed and we were unable to recover it. 
00:30:12.637 [2024-07-15 15:35:16.535131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.637 [2024-07-15 15:35:16.535214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.637 [2024-07-15 15:35:16.535232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.637 [2024-07-15 15:35:16.535242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.637 [2024-07-15 15:35:16.535251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.637 [2024-07-15 15:35:16.535270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.637 qpair failed and we were unable to recover it. 00:30:12.896 [2024-07-15 15:35:16.545129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.896 [2024-07-15 15:35:16.545209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.896 [2024-07-15 15:35:16.545227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.896 [2024-07-15 15:35:16.545237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.896 [2024-07-15 15:35:16.545245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.896 [2024-07-15 15:35:16.545263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.896 qpair failed and we were unable to recover it. 00:30:12.896 [2024-07-15 15:35:16.555206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.896 [2024-07-15 15:35:16.555322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.896 [2024-07-15 15:35:16.555341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.896 [2024-07-15 15:35:16.555353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.896 [2024-07-15 15:35:16.555363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.896 [2024-07-15 15:35:16.555382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.896 qpair failed and we were unable to recover it. 
00:30:12.896 [2024-07-15 15:35:16.565196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.565276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.565294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.565303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.565313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.565332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.575226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.575308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.575325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.575335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.575343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.575362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.585240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.585334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.585352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.585361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.585370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.585389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 
00:30:12.897 [2024-07-15 15:35:16.595275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.595364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.595382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.595391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.595400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.595419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.605313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.605399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.605420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.605430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.605438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.605457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.615342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.615457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.615476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.615485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.615494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.615513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 
00:30:12.897 [2024-07-15 15:35:16.625377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.625463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.625480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.625490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.625498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.625517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.635393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.635477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.635494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.635504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.635512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.635530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.645423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.645501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.645518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.645528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.645537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.645558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 
00:30:12.897 [2024-07-15 15:35:16.655458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.655541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.655559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.655568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.655577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.655595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.665471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.665571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.665589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.665599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.665608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.665629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.675519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.675602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.675620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.675629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.675638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.675657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 
00:30:12.897 [2024-07-15 15:35:16.685535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.685619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.685637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.685647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.685656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.685675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.695580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.897 [2024-07-15 15:35:16.695659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.897 [2024-07-15 15:35:16.695680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.897 [2024-07-15 15:35:16.695690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.897 [2024-07-15 15:35:16.695699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.897 [2024-07-15 15:35:16.695718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.897 qpair failed and we were unable to recover it. 00:30:12.897 [2024-07-15 15:35:16.705593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.705677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.705695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.705705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.705714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.705732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 
00:30:12.898 [2024-07-15 15:35:16.715637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.715724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.715742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.715751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.715760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.715778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 00:30:12.898 [2024-07-15 15:35:16.725726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.725809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.725826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.725840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.725849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.725867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 00:30:12.898 [2024-07-15 15:35:16.735690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.735778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.735797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.735807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.735820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.735842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 
00:30:12.898 [2024-07-15 15:35:16.745702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.745786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.745804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.745814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.745822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.745844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 00:30:12.898 [2024-07-15 15:35:16.755759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.755874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.755892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.755902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.755911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.755929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 00:30:12.898 [2024-07-15 15:35:16.765790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.765902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.765920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.765930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.765939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.765957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 
00:30:12.898 [2024-07-15 15:35:16.775725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.775816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.775837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.775847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.775856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.775874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 00:30:12.898 [2024-07-15 15:35:16.785817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.785905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.785923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.785933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.785942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.785961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 00:30:12.898 [2024-07-15 15:35:16.795857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:12.898 [2024-07-15 15:35:16.795969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:12.898 [2024-07-15 15:35:16.795987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:12.898 [2024-07-15 15:35:16.795996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:12.898 [2024-07-15 15:35:16.796005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:12.898 [2024-07-15 15:35:16.796024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:12.898 qpair failed and we were unable to recover it. 
00:30:13.158 [2024-07-15 15:35:16.805870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.805963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.805980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.805990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.805998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.806017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 00:30:13.158 [2024-07-15 15:35:16.815931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.816009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.816027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.816037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.816046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.816064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 00:30:13.158 [2024-07-15 15:35:16.825966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.826093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.826110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.826120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.826131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.826150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 
00:30:13.158 [2024-07-15 15:35:16.835943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.836025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.836043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.836053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.836061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.836080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 00:30:13.158 [2024-07-15 15:35:16.846006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.846091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.846108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.846118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.846127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.846145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 00:30:13.158 [2024-07-15 15:35:16.856036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.856118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.856136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.856146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.856155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.856173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 
00:30:13.158 [2024-07-15 15:35:16.866046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.866226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.866245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.866256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.866266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.866284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 00:30:13.158 [2024-07-15 15:35:16.876011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.876099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.876117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.876126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.876135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.876153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 00:30:13.158 [2024-07-15 15:35:16.886105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.158 [2024-07-15 15:35:16.886188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.158 [2024-07-15 15:35:16.886207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.158 [2024-07-15 15:35:16.886217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.158 [2024-07-15 15:35:16.886226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.158 [2024-07-15 15:35:16.886244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.158 qpair failed and we were unable to recover it. 
00:30:13.684 [2024-07-15 15:35:17.527935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.684 [2024-07-15 15:35:17.528020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.684 [2024-07-15 15:35:17.528037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.684 [2024-07-15 15:35:17.528047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.684 [2024-07-15 15:35:17.528056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.684 [2024-07-15 15:35:17.528074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.684 qpair failed and we were unable to recover it. 00:30:13.684 [2024-07-15 15:35:17.537976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.684 [2024-07-15 15:35:17.538062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.684 [2024-07-15 15:35:17.538079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.684 [2024-07-15 15:35:17.538089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.684 [2024-07-15 15:35:17.538097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.684 [2024-07-15 15:35:17.538119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.684 qpair failed and we were unable to recover it. 00:30:13.684 [2024-07-15 15:35:17.547971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.684 [2024-07-15 15:35:17.548053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.684 [2024-07-15 15:35:17.548070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.684 [2024-07-15 15:35:17.548080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.684 [2024-07-15 15:35:17.548089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.684 [2024-07-15 15:35:17.548107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.684 qpair failed and we were unable to recover it. 
00:30:13.684 [2024-07-15 15:35:17.558038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.684 [2024-07-15 15:35:17.558124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.684 [2024-07-15 15:35:17.558141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.684 [2024-07-15 15:35:17.558151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.684 [2024-07-15 15:35:17.558160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.684 [2024-07-15 15:35:17.558178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.684 qpair failed and we were unable to recover it. 00:30:13.684 [2024-07-15 15:35:17.568037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.684 [2024-07-15 15:35:17.568119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.684 [2024-07-15 15:35:17.568137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.684 [2024-07-15 15:35:17.568147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.684 [2024-07-15 15:35:17.568156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.684 [2024-07-15 15:35:17.568174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.684 qpair failed and we were unable to recover it. 00:30:13.684 [2024-07-15 15:35:17.578026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.684 [2024-07-15 15:35:17.578108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.684 [2024-07-15 15:35:17.578125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.684 [2024-07-15 15:35:17.578135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.684 [2024-07-15 15:35:17.578144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.684 [2024-07-15 15:35:17.578162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.684 qpair failed and we were unable to recover it. 
00:30:13.684 [2024-07-15 15:35:17.588084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.684 [2024-07-15 15:35:17.588169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.684 [2024-07-15 15:35:17.588190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.684 [2024-07-15 15:35:17.588200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.684 [2024-07-15 15:35:17.588209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.684 [2024-07-15 15:35:17.588226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.684 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.598155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.598239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.598257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.598267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.598276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.598295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.608154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.608260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.608277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.608287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.608296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.608314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 
00:30:13.945 [2024-07-15 15:35:17.618194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.618273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.618291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.618300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.618309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.618328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.628213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.628296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.628313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.628323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.628335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.628353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.638218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.638305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.638322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.638332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.638340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.638358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 
00:30:13.945 [2024-07-15 15:35:17.648299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.648414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.648431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.648441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.648450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.648468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.658328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.658447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.658465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.658474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.658482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.658500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.668304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.668390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.668408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.668417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.668426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.668444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 
00:30:13.945 [2024-07-15 15:35:17.678325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.678412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.678429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.678439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.678447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.678465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.688391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.688474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.688492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.688502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.688510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.688529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.698410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.698493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.698510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.698520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.698529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.698547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 
00:30:13.945 [2024-07-15 15:35:17.708429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.708601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.708618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.708627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.708636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.708655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.945 [2024-07-15 15:35:17.718450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.945 [2024-07-15 15:35:17.718538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.945 [2024-07-15 15:35:17.718556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.945 [2024-07-15 15:35:17.718569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.945 [2024-07-15 15:35:17.718578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.945 [2024-07-15 15:35:17.718597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.945 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.728481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.728578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.728595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.728605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.728613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.728632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 
00:30:13.946 [2024-07-15 15:35:17.738510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.738594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.738612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.738621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.738630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.738648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.748566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.748650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.748668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.748677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.748686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.748704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.758594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.758765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.758782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.758792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.758801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.758821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 
00:30:13.946 [2024-07-15 15:35:17.768622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.768731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.768749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.768758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.768767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.768785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.778649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.778729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.778747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.778757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.778765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.778784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.788657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.788744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.788762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.788772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.788780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.788799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 
00:30:13.946 [2024-07-15 15:35:17.798694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.798776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.798793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.798803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.798811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.798829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.808749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.808866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.808883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.808896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.808905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.808924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.818756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.818841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.818859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.818870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.818878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.818896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 
00:30:13.946 [2024-07-15 15:35:17.828765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.828851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.828869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.828879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.828888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.828906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.838799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.838884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.838902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.838912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.838921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.838939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 00:30:13.946 [2024-07-15 15:35:17.848834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:13.946 [2024-07-15 15:35:17.848917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:13.946 [2024-07-15 15:35:17.848934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:13.946 [2024-07-15 15:35:17.848944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:13.946 [2024-07-15 15:35:17.848952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:13.946 [2024-07-15 15:35:17.848971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:13.946 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-15 15:35:17.858842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.206 [2024-07-15 15:35:17.858927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.858945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.858954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.858963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.858982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.868877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.868970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.868988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.868998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.869006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.869025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.878919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.879003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.879021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.879030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.879039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.879058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-15 15:35:17.888953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.889036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.889053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.889063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.889071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.889089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.898963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.899069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.899091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.899102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.899110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.899128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.909025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.909154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.909172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.909181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.909190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.909209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-15 15:35:17.919029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.919113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.919130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.919140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.919148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.919166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.928997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.929178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.929195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.929205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.929214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.929233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.939078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.939188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.939205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.939215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.939224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.939245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-15 15:35:17.949085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.949171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.949188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.949198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.949206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.949224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.959121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.959209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.959228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.959238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.959247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.959266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.969156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.969272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.969289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.969299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.969308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.969326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-15 15:35:17.979213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.979295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.979312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.979322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.979331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.979349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.989200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.989330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.989351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.989361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.207 [2024-07-15 15:35:17.989369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.207 [2024-07-15 15:35:17.989388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-15 15:35:17.999238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.207 [2024-07-15 15:35:17.999356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.207 [2024-07-15 15:35:17.999374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.207 [2024-07-15 15:35:17.999385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:17.999394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:17.999413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 
00:30:14.208 [2024-07-15 15:35:18.009278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.009365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.009382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.009392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.009400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.009420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.208 [2024-07-15 15:35:18.019289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.019372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.019389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.019399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.019408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.019426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.208 [2024-07-15 15:35:18.029300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.029384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.029401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.029410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.029422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.029441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 
00:30:14.208 [2024-07-15 15:35:18.039343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.039427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.039444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.039454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.039462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.039480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.208 [2024-07-15 15:35:18.049381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.049552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.049568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.049578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.049587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.049606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.208 [2024-07-15 15:35:18.059401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.059479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.059496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.059506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.059514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.059532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 
00:30:14.208 [2024-07-15 15:35:18.069422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.069507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.069524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.069534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.069543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.069561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.208 [2024-07-15 15:35:18.079466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.079552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.079570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.079579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.079588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.079605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.208 [2024-07-15 15:35:18.089436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.089522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.089539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.089548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.089557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.089575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 
00:30:14.208 [2024-07-15 15:35:18.099525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.099606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.099624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.099634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.099642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.099661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.208 [2024-07-15 15:35:18.109560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.208 [2024-07-15 15:35:18.109651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.208 [2024-07-15 15:35:18.109669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.208 [2024-07-15 15:35:18.109678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.208 [2024-07-15 15:35:18.109686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.208 [2024-07-15 15:35:18.109705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.208 qpair failed and we were unable to recover it. 00:30:14.469 [2024-07-15 15:35:18.119568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.119682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.119701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.119711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.119723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.119741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 
00:30:14.469 [2024-07-15 15:35:18.129614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.129745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.129763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.129772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.129781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.129800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 00:30:14.469 [2024-07-15 15:35:18.139640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.139723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.139740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.139749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.139758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.139776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 00:30:14.469 [2024-07-15 15:35:18.149669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.149756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.149773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.149783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.149792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.149810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 
00:30:14.469 [2024-07-15 15:35:18.159671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.159754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.159771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.159781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.159789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.159808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 00:30:14.469 [2024-07-15 15:35:18.169743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.169824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.169847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.169856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.169865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.169884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 00:30:14.469 [2024-07-15 15:35:18.179757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.179933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.179951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.179960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.179969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.179988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 
00:30:14.469 [2024-07-15 15:35:18.189780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.189869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.189886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.189896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.189905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.189923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 00:30:14.469 [2024-07-15 15:35:18.199808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.199898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.199916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.199925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.199934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.199952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 00:30:14.469 [2024-07-15 15:35:18.209881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.210057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.469 [2024-07-15 15:35:18.210075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.469 [2024-07-15 15:35:18.210088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.469 [2024-07-15 15:35:18.210097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.469 [2024-07-15 15:35:18.210116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.469 qpair failed and we were unable to recover it. 
00:30:14.469 [2024-07-15 15:35:18.219879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.469 [2024-07-15 15:35:18.219973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.219991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.220001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.220010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.220029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.229884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.229969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.229986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.229996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.230005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.230024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.239930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.240015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.240032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.240041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.240050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.240069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 
00:30:14.470 [2024-07-15 15:35:18.249968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.250052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.250069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.250078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.250087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.250106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.259999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.260085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.260102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.260111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.260120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.260138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.270050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.270223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.270240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.270250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.270259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.270278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 
00:30:14.470 [2024-07-15 15:35:18.280059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.280169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.280186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.280196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.280204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.280222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.290091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.290268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.290285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.290296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.290306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.290324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.300121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.300204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.300225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.300234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.300243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.300262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 
00:30:14.470 [2024-07-15 15:35:18.310138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.310224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.310241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.310251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.310260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.310279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.320180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.320279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.320297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.320306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.320315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.320333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.330221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.330316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.330333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.330342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.330351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.330369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 
00:30:14.470 [2024-07-15 15:35:18.340228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.340312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.470 [2024-07-15 15:35:18.340330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.470 [2024-07-15 15:35:18.340339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.470 [2024-07-15 15:35:18.340348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.470 [2024-07-15 15:35:18.340370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.470 qpair failed and we were unable to recover it. 00:30:14.470 [2024-07-15 15:35:18.350239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.470 [2024-07-15 15:35:18.350321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.471 [2024-07-15 15:35:18.350338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.471 [2024-07-15 15:35:18.350348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.471 [2024-07-15 15:35:18.350357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.471 [2024-07-15 15:35:18.350375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.471 qpair failed and we were unable to recover it. 00:30:14.471 [2024-07-15 15:35:18.360281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.471 [2024-07-15 15:35:18.360362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.471 [2024-07-15 15:35:18.360379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.471 [2024-07-15 15:35:18.360389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.471 [2024-07-15 15:35:18.360397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.471 [2024-07-15 15:35:18.360416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.471 qpair failed and we were unable to recover it. 
00:30:14.471 [2024-07-15 15:35:18.370307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.471 [2024-07-15 15:35:18.370395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.471 [2024-07-15 15:35:18.370412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.471 [2024-07-15 15:35:18.370422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.471 [2024-07-15 15:35:18.370431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.471 [2024-07-15 15:35:18.370448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.471 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.380346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.380424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.380441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.380451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.380460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.380478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.390354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.390438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.390459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.390468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.390477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.390496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 
00:30:14.732 [2024-07-15 15:35:18.400396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.400477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.400495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.400504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.400513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.400531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.410431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.410511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.410528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.410538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.410547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.410565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.420472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.420575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.420593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.420602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.420611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.420630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 
00:30:14.732 [2024-07-15 15:35:18.430473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.430556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.430574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.430583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.430595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.430614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.440540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.440650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.440669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.440680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.440690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.440709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.450522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.450615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.450634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.450645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.450655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.450674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 
00:30:14.732 [2024-07-15 15:35:18.460500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.460623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.460640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.460650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.460658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.460677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.470618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.470794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.470812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.470822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.470836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.470856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.480565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.480735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.480753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.480762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.480771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.480790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 
00:30:14.732 [2024-07-15 15:35:18.490665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.490792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.490810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.490821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.732 [2024-07-15 15:35:18.490835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.732 [2024-07-15 15:35:18.490855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.732 qpair failed and we were unable to recover it. 00:30:14.732 [2024-07-15 15:35:18.500670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.732 [2024-07-15 15:35:18.500751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.732 [2024-07-15 15:35:18.500769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.732 [2024-07-15 15:35:18.500779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.500788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.500806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.510705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.510789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.510806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.510816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.510825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.510847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-15 15:35:18.520768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.520859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.520879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.520889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.520901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.520919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.530741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.530884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.530902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.530912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.530921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.530939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.540898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.540984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.541002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.541013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.541021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.541040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-15 15:35:18.550801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.550891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.550908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.550918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.550927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.550946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.560871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.560959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.560977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.560987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.560996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.561015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.570865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.570996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.571015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.571024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.571033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.571053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-15 15:35:18.580911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.580995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.581012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.581022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.581030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.581049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.590984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.591113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.591130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.591140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.591149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.591167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.600989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.601075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.601093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.601102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.601111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.601129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.733 [2024-07-15 15:35:18.611026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.611112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.611129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.611143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.611151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.611169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.621079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.621183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.621201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.621211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.621220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.621238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 00:30:14.733 [2024-07-15 15:35:18.630988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.733 [2024-07-15 15:35:18.631071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.733 [2024-07-15 15:35:18.631088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.733 [2024-07-15 15:35:18.631098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.733 [2024-07-15 15:35:18.631106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.733 [2024-07-15 15:35:18.631124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.733 qpair failed and we were unable to recover it. 
00:30:14.994 [2024-07-15 15:35:18.641079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.994 [2024-07-15 15:35:18.641164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.994 [2024-07-15 15:35:18.641182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.994 [2024-07-15 15:35:18.641193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.994 [2024-07-15 15:35:18.641202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.994 [2024-07-15 15:35:18.641220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.994 qpair failed and we were unable to recover it. 00:30:14.994 [2024-07-15 15:35:18.651087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.994 [2024-07-15 15:35:18.651173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.994 [2024-07-15 15:35:18.651190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.994 [2024-07-15 15:35:18.651200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.994 [2024-07-15 15:35:18.651209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.994 [2024-07-15 15:35:18.651227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.994 qpair failed and we were unable to recover it. 00:30:14.994 [2024-07-15 15:35:18.661139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.994 [2024-07-15 15:35:18.661226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.994 [2024-07-15 15:35:18.661243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.994 [2024-07-15 15:35:18.661253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.994 [2024-07-15 15:35:18.661262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.994 [2024-07-15 15:35:18.661281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.994 qpair failed and we were unable to recover it. 
00:30:14.994 [2024-07-15 15:35:18.671099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.994 [2024-07-15 15:35:18.671276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.994 [2024-07-15 15:35:18.671294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.994 [2024-07-15 15:35:18.671304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.994 [2024-07-15 15:35:18.671314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.994 [2024-07-15 15:35:18.671333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.994 qpair failed and we were unable to recover it. 00:30:14.994 [2024-07-15 15:35:18.681119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.994 [2024-07-15 15:35:18.681221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.994 [2024-07-15 15:35:18.681238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.994 [2024-07-15 15:35:18.681248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.994 [2024-07-15 15:35:18.681256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.994 [2024-07-15 15:35:18.681275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.994 qpair failed and we were unable to recover it. 00:30:14.994 [2024-07-15 15:35:18.691223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.994 [2024-07-15 15:35:18.691334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.994 [2024-07-15 15:35:18.691352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.994 [2024-07-15 15:35:18.691361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.994 [2024-07-15 15:35:18.691370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.691389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 
00:30:14.995 [2024-07-15 15:35:18.701276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.701358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.701378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.701388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.701397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.701415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.711253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.711338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.711355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.711365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.711374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.711392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.721310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.721396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.721414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.721424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.721432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.721450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 
00:30:14.995 [2024-07-15 15:35:18.731258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.731344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.731362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.731372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.731380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.731399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.741276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.741362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.741380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.741390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.741398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.741420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.751382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.751467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.751484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.751494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.751502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.751520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 
00:30:14.995 [2024-07-15 15:35:18.761428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.761516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.761533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.761543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.761551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.761569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.771460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.771592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.771610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.771620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.771628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.771648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.781389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.781473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.781490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.781500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.781509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.781527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 
00:30:14.995 [2024-07-15 15:35:18.791431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.791604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.791625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.791634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.791643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.791662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.801440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.801522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.801540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.801550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.801559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.801577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.811584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.811760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.811779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.811789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.811798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.811817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 
00:30:14.995 [2024-07-15 15:35:18.821544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.821718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.821735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.821745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.821754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.821774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.831600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.995 [2024-07-15 15:35:18.831682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.995 [2024-07-15 15:35:18.831700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.995 [2024-07-15 15:35:18.831710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.995 [2024-07-15 15:35:18.831719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.995 [2024-07-15 15:35:18.831740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.995 qpair failed and we were unable to recover it. 00:30:14.995 [2024-07-15 15:35:18.841628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.996 [2024-07-15 15:35:18.841707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.996 [2024-07-15 15:35:18.841724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.996 [2024-07-15 15:35:18.841734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.996 [2024-07-15 15:35:18.841743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.996 [2024-07-15 15:35:18.841761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.996 qpair failed and we were unable to recover it. 
00:30:14.996 [2024-07-15 15:35:18.851663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.996 [2024-07-15 15:35:18.851741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.996 [2024-07-15 15:35:18.851759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.996 [2024-07-15 15:35:18.851769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.996 [2024-07-15 15:35:18.851778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.996 [2024-07-15 15:35:18.851796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.996 qpair failed and we were unable to recover it. 00:30:14.996 [2024-07-15 15:35:18.861659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.996 [2024-07-15 15:35:18.861740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.996 [2024-07-15 15:35:18.861758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.996 [2024-07-15 15:35:18.861768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.996 [2024-07-15 15:35:18.861776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.996 [2024-07-15 15:35:18.861794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.996 qpair failed and we were unable to recover it. 00:30:14.996 [2024-07-15 15:35:18.871702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.996 [2024-07-15 15:35:18.871786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.996 [2024-07-15 15:35:18.871804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.996 [2024-07-15 15:35:18.871814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.996 [2024-07-15 15:35:18.871823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.996 [2024-07-15 15:35:18.871844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.996 qpair failed and we were unable to recover it. 
00:30:14.996 [2024-07-15 15:35:18.881678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.996 [2024-07-15 15:35:18.881761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.996 [2024-07-15 15:35:18.881778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.996 [2024-07-15 15:35:18.881788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.996 [2024-07-15 15:35:18.881797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.996 [2024-07-15 15:35:18.881815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.996 qpair failed and we were unable to recover it. 00:30:14.996 [2024-07-15 15:35:18.891774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.996 [2024-07-15 15:35:18.891862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.996 [2024-07-15 15:35:18.891880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.996 [2024-07-15 15:35:18.891890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.996 [2024-07-15 15:35:18.891898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:14.996 [2024-07-15 15:35:18.891917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.996 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:18.901769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.901860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.901878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.901887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.901896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.901914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 
00:30:15.257 [2024-07-15 15:35:18.911782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.911872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.911889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.911899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.911907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.911925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:18.921803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.921891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.921908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.921918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.921930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.921949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:18.931929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.932021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.932039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.932049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.932057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.932076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 
00:30:15.257 [2024-07-15 15:35:18.941941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.942029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.942047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.942057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.942066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.942084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:18.951981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.952069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.952086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.952096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.952104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.952122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:18.961974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.962059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.962077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.962087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.962095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.962113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 
00:30:15.257 [2024-07-15 15:35:18.972019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.972242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.972261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.972271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.972280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.972299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:18.982050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.982131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.982149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.982158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.982168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.982186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:18.992065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:18.992149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:18.992166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:18.992176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:18.992184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:18.992203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 
00:30:15.257 [2024-07-15 15:35:19.002006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:19.002091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:19.002109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:19.002119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:19.002127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:19.002145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:19.012137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:19.012222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:19.012240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:19.012253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:19.012262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:19.012281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 00:30:15.257 [2024-07-15 15:35:19.022160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:19.022244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.257 [2024-07-15 15:35:19.022262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.257 [2024-07-15 15:35:19.022272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.257 [2024-07-15 15:35:19.022281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.257 [2024-07-15 15:35:19.022299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.257 qpair failed and we were unable to recover it. 
00:30:15.257 [2024-07-15 15:35:19.032204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.257 [2024-07-15 15:35:19.032334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.032352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.032362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.032371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.032388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.042156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.042245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.042263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.042272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.042281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.042299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.052204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.052285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.052302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.052312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.052321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.052339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 
00:30:15.258 [2024-07-15 15:35:19.062168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.062250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.062268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.062278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.062286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.062305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.072266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.072351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.072369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.072379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.072387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.072406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.082317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.082404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.082423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.082433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.082443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.082461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 
00:30:15.258 [2024-07-15 15:35:19.092341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.092423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.092441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.092451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.092459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.092477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.102371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.102458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.102480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.102489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.102498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.102516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.112396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.112484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.112503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.112513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.112523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.112542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 
00:30:15.258 [2024-07-15 15:35:19.122435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.122515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.122533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.122543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.122552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.122571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.132470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.132646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.132664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.132673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.132682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.132701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 00:30:15.258 [2024-07-15 15:35:19.142480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.258 [2024-07-15 15:35:19.142563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.258 [2024-07-15 15:35:19.142581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.258 [2024-07-15 15:35:19.142591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.258 [2024-07-15 15:35:19.142599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90 00:30:15.258 [2024-07-15 15:35:19.142617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.258 qpair failed and we were unable to recover it. 
00:30:15.258 [2024-07-15 15:35:19.152497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.258 [2024-07-15 15:35:19.152579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.258 [2024-07-15 15:35:19.152596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.258 [2024-07-15 15:35:19.152606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.258 [2024-07-15 15:35:19.152615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.258 [2024-07-15 15:35:19.152633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.258 qpair failed and we were unable to recover it.
00:30:15.517 [2024-07-15 15:35:19.162543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.517 [2024-07-15 15:35:19.162627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.517 [2024-07-15 15:35:19.162644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.517 [2024-07-15 15:35:19.162654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.517 [2024-07-15 15:35:19.162663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.517 [2024-07-15 15:35:19.162681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.517 qpair failed and we were unable to recover it.
00:30:15.517 [2024-07-15 15:35:19.172572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.517 [2024-07-15 15:35:19.172658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.517 [2024-07-15 15:35:19.172677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.517 [2024-07-15 15:35:19.172687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.517 [2024-07-15 15:35:19.172697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.517 [2024-07-15 15:35:19.172715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.517 qpair failed and we were unable to recover it.
00:30:15.517 [2024-07-15 15:35:19.182637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.517 [2024-07-15 15:35:19.182722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.517 [2024-07-15 15:35:19.182740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.517 [2024-07-15 15:35:19.182749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.517 [2024-07-15 15:35:19.182758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.517 [2024-07-15 15:35:19.182776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.517 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.192610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.192696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.192717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.192726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.192735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.518 [2024-07-15 15:35:19.192753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.202632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.202719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.202737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.202747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.202756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.518 [2024-07-15 15:35:19.202774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.212724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.212836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.212854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.212863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.212872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.518 [2024-07-15 15:35:19.212890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.222708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.222789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.222807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.222816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.222826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.518 [2024-07-15 15:35:19.222848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.232716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.232799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.232816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.232826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.232838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.518 [2024-07-15 15:35:19.232859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.242763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.242856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.242874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.242883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.242892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.518 [2024-07-15 15:35:19.242910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.252792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.252912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.252929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.252939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.252948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff16c000b90
00:30:15.518 [2024-07-15 15:35:19.252966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.262850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.262958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.262987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.263003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.263016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff164000b90
00:30:15.518 [2024-07-15 15:35:19.263043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.272880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.272965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.272984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.272994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.273003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff164000b90
00:30:15.518 [2024-07-15 15:35:19.273022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.282896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.283033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.283066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.283081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.283094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff174000b90
00:30:15.518 [2024-07-15 15:35:19.283123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.292904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.293038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.293057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.293067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.293076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff174000b90
00:30:15.518 [2024-07-15 15:35:19.293095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.293189] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:30:15.518 A controller has encountered a failure and is being reset.
00:30:15.518 [2024-07-15 15:35:19.303000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.303120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.303149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.303163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.303176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19dd210
00:30:15.518 [2024-07-15 15:35:19.303201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.518 [2024-07-15 15:35:19.312954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.518 [2024-07-15 15:35:19.313066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.518 [2024-07-15 15:35:19.313085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.518 [2024-07-15 15:35:19.313095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.518 [2024-07-15 15:35:19.313103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19dd210
00:30:15.518 [2024-07-15 15:35:19.313121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:15.518 qpair failed and we were unable to recover it.
00:30:15.778 Controller properly reset.
00:30:15.778 Initializing NVMe Controllers
00:30:15.778 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:15.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:15.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:15.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:15.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:15.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:15.778 Initialization complete. Launching workers.
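The repeated CONNECT failures above are the expected signature of this disconnect test rather than a defect: the initiator keeps polling spdk_nvme_qpair_process_completions(), which surfaces the severed connection as CQ transport error -6 (-ENXIO, "No such device or address") until the listener returns and the controller reset completes. A comparable disconnect window can be opened by hand with the stock rpc.py listener calls; this is only a sketch, assuming a target is already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the rpc.py path below is this job's workspace, purely illustrative:

  #!/usr/bin/env bash
  # Sketch: force an NVMe-oF TCP disconnect/reconnect window against a live target.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # illustrative path
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the listener: pending and new fabrics CONNECTs now fail, and initiator
  # pollers report CQ transport error -6, as in the records above.
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # Restore it: the initiator's reset path reconnects, matching the
  # "Controller properly reset." line above.
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420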
00:30:15.778 Starting thread on core 1 00:30:15.778 Starting thread on core 2 00:30:15.778 Starting thread on core 3 00:30:15.778 Starting thread on core 0 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:15.778 00:30:15.778 real 0m11.358s 00:30:15.778 user 0m20.517s 00:30:15.778 sys 0m5.073s 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:15.778 ************************************ 00:30:15.778 END TEST nvmf_target_disconnect_tc2 00:30:15.778 ************************************ 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:15.778 rmmod nvme_tcp 00:30:15.778 rmmod nvme_fabrics 00:30:15.778 rmmod nvme_keyring 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3226710 ']' 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3226710 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3226710 ']' 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3226710 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3226710 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3226710' 00:30:15.778 killing process with pid 3226710 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3226710 00:30:15.778 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3226710 00:30:16.038 
15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:16.038 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:16.038 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:16.038 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:16.038 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:16.038 15:35:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:16.038 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:16.038 15:35:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:18.574 15:35:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:18.574
00:30:18.574 real 0m20.997s
00:30:18.574 user 0m48.277s
00:30:18.574 sys 0m10.776s
00:30:18.574 15:35:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:18.574 15:35:21 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:18.574 ************************************
00:30:18.574 END TEST nvmf_target_disconnect
00:30:18.574 ************************************
00:30:18.574 15:35:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:30:18.574 15:35:22 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:30:18.574 15:35:22 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:18.574 15:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:18.574 15:35:22 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:30:18.574
00:30:18.574 real 22m17.435s
00:30:18.574 user 45m21.212s
00:30:18.574 sys 8m15.016s
00:30:18.574 15:35:22 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:18.574 15:35:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:18.574 ************************************
00:30:18.574 END TEST nvmf_tcp
00:30:18.574 ************************************
00:30:18.574 15:35:22 -- common/autotest_common.sh@1142 -- # return 0
00:30:18.574 15:35:22 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:30:18.574 15:35:22 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:18.574 15:35:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:30:18.574 15:35:22 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:18.574 15:35:22 -- common/autotest_common.sh@10 -- # set +x
00:30:18.574 ************************************
00:30:18.574 START TEST spdkcli_nvmf_tcp
00:30:18.574 ************************************
00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:18.574 * Looking for test storage...
00:30:18.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.574 15:35:22 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3228422 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3228422 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3228422 ']' 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.575 15:35:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.575 [2024-07-15 15:35:22.302855] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:18.575 [2024-07-15 15:35:22.302916] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3228422 ] 00:30:18.575 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.575 [2024-07-15 15:35:22.370276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:18.575 [2024-07-15 15:35:22.445392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.575 [2024-07-15 15:35:22.445395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:19.508 15:35:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:19.508 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:19.508 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:19.508 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:19.508 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:19.508 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:19.508 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:19.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:19.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:19.508 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:19.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:19.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:19.508 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:19.508 ' 00:30:22.037 [2024-07-15 15:35:25.549771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.972 [2024-07-15 15:35:26.749744] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:25.506 [2024-07-15 15:35:28.968451] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:27.430 [2024-07-15 15:35:30.878475] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:28.803 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:28.803 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:28.803 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:28.803 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:28.803 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:28.803 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:28.803 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:28.803 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:28.803 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:28.803 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:28.803 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:28.803 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:28.803 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:28.803 15:35:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:29.062 15:35:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:29.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:29.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:29.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:29.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:29.062 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:29.062 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:29.062 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:29.062 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:29.062 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:29.062 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:29.062 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:29.062 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:29.062 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:29.062 ' 00:30:34.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:34.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:34.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:34.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:34.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:34.365 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:34.366 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:34.366 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:34.366 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:34.366 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:34.366 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:34.366 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:34.366 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:34.366 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3228422 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3228422 ']' 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3228422 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3228422 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3228422' 00:30:34.366 killing process with pid 3228422 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3228422 00:30:34.366 15:35:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3228422 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3228422 ']' 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3228422 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3228422 ']' 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3228422 00:30:34.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3228422) - No such process 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3228422 is not found' 00:30:34.366 Process with pid 3228422 is not found 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:34.366 00:30:34.366 real 0m16.067s 00:30:34.366 user 0m33.299s 00:30:34.366 sys 0m0.935s 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:34.366 15:35:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.366 ************************************ 00:30:34.366 END TEST spdkcli_nvmf_tcp 00:30:34.366 ************************************ 00:30:34.366 15:35:38 -- common/autotest_common.sh@1142 -- # return 0 00:30:34.366 15:35:38 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:34.366 15:35:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:34.366 15:35:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:34.366 15:35:38 -- common/autotest_common.sh@10 -- # set +x 00:30:34.366 ************************************ 00:30:34.366 START TEST nvmf_identify_passthru 00:30:34.366 ************************************ 00:30:34.366 15:35:38 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:34.626 * Looking for test storage... 00:30:34.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:34.626 15:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.626 15:35:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.626 15:35:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.626 15:35:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:34.626 15:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.626 15:35:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.626 15:35:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.626 15:35:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:34.626 15:35:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.626 15:35:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.626 15:35:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:34.626 15:35:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:34.626 15:35:38 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:34.626 15:35:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.189 15:35:44 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:41.189 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:41.189 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:41.189 Found net devices under 0000:af:00.0: cvl_0_0 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:41.189 Found net devices under 0000:af:00.1: cvl_0_1 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.189 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
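The nvmf_tcp_init trace below wires the two e810 ports found above into a loopback pair: cvl_0_0 moves into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic traverses the real NIC. Condensed into a standalone sketch of the same steps (interface and namespace names are the ones this rig uses; substitute your own):

  #!/usr/bin/env bash
  set -e
  # Move one port into its own namespace; it becomes the nvmf target's NIC.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address both ends of the pair on the same /24.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # Bring the links (and the namespace loopback) up.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP (port 4420) in through the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions, as the trace below does.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1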
00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:41.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:30:41.190 00:30:41.190 --- 10.0.0.2 ping statistics --- 00:30:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.190 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:41.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:30:41.190 00:30:41.190 --- 10.0.0.1 ping statistics --- 00:30:41.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.190 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:41.190 15:35:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:41.190 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:41.190 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:30:41.190 15:35:44 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:d8:00.0 00:30:41.190 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:30:41.190 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:30:41.190 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:41.190 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:41.190 15:35:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:41.190 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.458 
15:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:30:46.458 15:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:46.458 15:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:46.458 15:35:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:46.458 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.655 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:50.655 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.655 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.655 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3235801 00:30:50.655 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:50.655 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3235801 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3235801 ']' 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:50.655 15:35:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:50.655 15:35:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:50.655 [2024-07-15 15:35:54.353915] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:50.655 [2024-07-15 15:35:54.353969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.655 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.655 [2024-07-15 15:35:54.428269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.655 [2024-07-15 15:35:54.501756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.655 [2024-07-15 15:35:54.501794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
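For reference, the serial and model number captured above are the PCIe-side baseline for this test: the same two fields are read back later over NVMe/TCP and compared, which is what demonstrates that --passthru-identify-ctrlr forwards the physical controller's identify data. The scrape itself reduces to (paths abridged relative to the workspace):

bdf=0000:d8:00.0
serial=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
model=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Model Number:' | awk '{print $3}')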
00:30:50.655 [2024-07-15 15:35:54.501803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.655 [2024-07-15 15:35:54.501811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.655 [2024-07-15 15:35:54.501817] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.655 [2024-07-15 15:35:54.501948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.655 [2024-07-15 15:35:54.502045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.655 [2024-07-15 15:35:54.502129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.655 [2024-07-15 15:35:54.502130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.590 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:51.591 15:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:51.591 INFO: Log level set to 20 00:30:51.591 INFO: Requests: 00:30:51.591 { 00:30:51.591 "jsonrpc": "2.0", 00:30:51.591 "method": "nvmf_set_config", 00:30:51.591 "id": 1, 00:30:51.591 "params": { 00:30:51.591 "admin_cmd_passthru": { 00:30:51.591 "identify_ctrlr": true 00:30:51.591 } 00:30:51.591 } 00:30:51.591 } 00:30:51.591 00:30:51.591 INFO: response: 00:30:51.591 { 00:30:51.591 "jsonrpc": "2.0", 00:30:51.591 "id": 1, 00:30:51.591 "result": true 00:30:51.591 } 00:30:51.591 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.591 15:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:51.591 INFO: Setting log level to 20 00:30:51.591 INFO: Setting log level to 20 00:30:51.591 INFO: Log level set to 20 00:30:51.591 INFO: Log level set to 20 00:30:51.591 INFO: Requests: 00:30:51.591 { 00:30:51.591 "jsonrpc": "2.0", 00:30:51.591 "method": "framework_start_init", 00:30:51.591 "id": 1 00:30:51.591 } 00:30:51.591 00:30:51.591 INFO: Requests: 00:30:51.591 { 00:30:51.591 "jsonrpc": "2.0", 00:30:51.591 "method": "framework_start_init", 00:30:51.591 "id": 1 00:30:51.591 } 00:30:51.591 00:30:51.591 [2024-07-15 15:35:55.248331] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:51.591 INFO: response: 00:30:51.591 { 00:30:51.591 "jsonrpc": "2.0", 00:30:51.591 "id": 1, 00:30:51.591 "result": true 00:30:51.591 } 00:30:51.591 00:30:51.591 INFO: response: 00:30:51.591 { 00:30:51.591 "jsonrpc": "2.0", 00:30:51.591 "id": 1, 00:30:51.591 "result": true 00:30:51.591 } 00:30:51.591 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.591 15:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.591 15:35:55 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:51.591 INFO: Setting log level to 40 00:30:51.591 INFO: Setting log level to 40 00:30:51.591 INFO: Setting log level to 40 00:30:51.591 [2024-07-15 15:35:55.261794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.591 15:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:51.591 15:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.591 15:35:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.879 Nvme0n1 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.879 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.879 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.879 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.879 [2024-07-15 15:35:58.195115] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.879 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.879 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.880 [ 00:30:54.880 { 00:30:54.880 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:54.880 "subtype": "Discovery", 00:30:54.880 "listen_addresses": [], 00:30:54.880 "allow_any_host": true, 00:30:54.880 "hosts": [] 00:30:54.880 }, 00:30:54.880 { 00:30:54.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.880 "subtype": "NVMe", 00:30:54.880 "listen_addresses": [ 00:30:54.880 { 00:30:54.880 "trtype": "TCP", 00:30:54.880 "adrfam": "IPv4", 00:30:54.880 "traddr": "10.0.0.2", 00:30:54.880 "trsvcid": "4420" 00:30:54.880 } 00:30:54.880 ], 00:30:54.880 "allow_any_host": true, 00:30:54.880 "hosts": [], 00:30:54.880 "serial_number": 
"SPDK00000000000001", 00:30:54.880 "model_number": "SPDK bdev Controller", 00:30:54.880 "max_namespaces": 1, 00:30:54.880 "min_cntlid": 1, 00:30:54.880 "max_cntlid": 65519, 00:30:54.880 "namespaces": [ 00:30:54.880 { 00:30:54.880 "nsid": 1, 00:30:54.880 "bdev_name": "Nvme0n1", 00:30:54.880 "name": "Nvme0n1", 00:30:54.880 "nguid": "CFB382EC5E56408BA45435B1F7A999C6", 00:30:54.880 "uuid": "cfb382ec-5e56-408b-a454-35b1f7a999c6" 00:30:54.880 } 00:30:54.880 ] 00:30:54.880 } 00:30:54.880 ] 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:54.880 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:54.880 EAL: No free 2048 kB hugepages reported on node 1 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:54.880 15:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:54.880 rmmod nvme_tcp 00:30:54.880 rmmod nvme_fabrics 00:30:54.880 rmmod nvme_keyring 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:54.880 15:35:58 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3235801 ']' 00:30:54.880 15:35:58 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3235801 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3235801 ']' 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3235801 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3235801 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3235801' 00:30:54.880 killing process with pid 3235801 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3235801 00:30:54.880 15:35:58 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3235801 00:30:56.781 15:36:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:56.781 15:36:00 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:56.781 15:36:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:56.781 15:36:00 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.781 15:36:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:56.781 15:36:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.781 15:36:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:56.781 15:36:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.383 15:36:02 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:59.383 00:30:59.383 real 0m24.519s 00:30:59.383 user 0m32.859s 00:30:59.383 sys 0m6.263s 00:30:59.383 15:36:02 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:59.383 15:36:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.383 ************************************ 00:30:59.383 END TEST nvmf_identify_passthru 00:30:59.383 ************************************ 00:30:59.383 15:36:02 -- common/autotest_common.sh@1142 -- # return 0 00:30:59.383 15:36:02 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:59.383 15:36:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:59.383 15:36:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.383 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:30:59.383 ************************************ 00:30:59.383 START TEST nvmf_dif 00:30:59.383 ************************************ 00:30:59.383 15:36:02 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:59.383 * Looking for test storage... 
00:30:59.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.383 15:36:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.383 15:36:02 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.383 15:36:02 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.383 15:36:02 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.384 15:36:02 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.384 15:36:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.384 15:36:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.384 15:36:02 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.384 15:36:02 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:59.384 15:36:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:59.384 15:36:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:59.384 15:36:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:59.384 15:36:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:59.384 15:36:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:59.384 15:36:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.384 15:36:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:59.384 15:36:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:59.384 15:36:02 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:59.384 15:36:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:05.959 15:36:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:05.960 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:05.960 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:05.960 Found net devices under 0000:af:00.0: cvl_0_0 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:05.960 Found net devices under 0000:af:00.1: cvl_0_1 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.960 15:36:09 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:05.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:31:05.960 00:31:05.960 --- 10.0.0.2 ping statistics --- 00:31:05.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.960 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:05.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:31:05.960 00:31:05.960 --- 10.0.0.1 ping statistics --- 00:31:05.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.960 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:05.960 15:36:09 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:09.251 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:31:09.251 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:09.251 15:36:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:09.251 15:36:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3242187 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3242187 00:31:09.251 15:36:12 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3242187 ']' 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:09.251 15:36:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.251 [2024-07-15 15:36:12.732397] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:31:09.251 [2024-07-15 15:36:12.732455] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.251 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.251 [2024-07-15 15:36:12.807249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.251 [2024-07-15 15:36:12.880493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.251 [2024-07-15 15:36:12.880528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.251 [2024-07-15 15:36:12.880538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.251 [2024-07-15 15:36:12.880546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.251 [2024-07-15 15:36:12.880553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
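For reference, what distinguishes this target from the identify_passthru one is the extra option appended by dif.sh: the TCP transport is created with --dif-insert-or-strip, and each test subsystem is backed by a metadata-capable null bdev formatted with DIF type 1, so protection information is inserted and stripped at the target. Through rpc.py, the two calls traced below are equivalent to (default RPC socket path assumed):

./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB bdev, 512-byte blocks + 16-byte metadata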
00:31:09.251 [2024-07-15 15:36:12.880572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:09.817 15:36:13 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.817 15:36:13 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.817 15:36:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:09.817 15:36:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.817 [2024-07-15 15:36:13.577944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.817 15:36:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.817 15:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:09.818 ************************************ 00:31:09.818 START TEST fio_dif_1_default 00:31:09.818 ************************************ 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:09.818 bdev_null0 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:09.818 [2024-07-15 15:36:13.662280] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:09.818 { 00:31:09.818 "params": { 00:31:09.818 "name": "Nvme$subsystem", 00:31:09.818 "trtype": "$TEST_TRANSPORT", 00:31:09.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:09.818 "adrfam": "ipv4", 00:31:09.818 "trsvcid": "$NVMF_PORT", 00:31:09.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.818 "hdgst": ${hdgst:-false}, 00:31:09.818 "ddgst": ${ddgst:-false} 00:31:09.818 }, 00:31:09.818 "method": "bdev_nvme_attach_controller" 00:31:09.818 } 00:31:09.818 EOF 00:31:09.818 )") 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:09.818 "params": { 00:31:09.818 "name": "Nvme0", 00:31:09.818 "trtype": "tcp", 00:31:09.818 "traddr": "10.0.0.2", 00:31:09.818 "adrfam": "ipv4", 00:31:09.818 "trsvcid": "4420", 00:31:09.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.818 "hdgst": false, 00:31:09.818 "ddgst": false 00:31:09.818 }, 00:31:09.818 "method": "bdev_nvme_attach_controller" 00:31:09.818 }' 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:09.818 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.097 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.097 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.097 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.098 15:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.358 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:10.358 fio-3.35 00:31:10.358 Starting 1 thread 00:31:10.358 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.549 00:31:22.549 filename0: (groupid=0, jobs=1): err= 0: pid=3242611: Mon Jul 15 15:36:24 2024 00:31:22.549 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10020msec) 00:31:22.549 slat (nsec): min=5582, max=25718, avg=5884.25, stdev=1156.39 00:31:22.549 clat (usec): min=40849, max=45086, avg=41729.81, stdev=498.27 00:31:22.549 lat (usec): min=40855, max=45106, avg=41735.70, stdev=498.45 00:31:22.549 clat percentiles (usec): 00:31:22.549 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:22.549 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:31:22.549 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:22.549 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:31:22.549 | 99.99th=[44827] 00:31:22.549 bw ( KiB/s): min= 352, max= 384, per=99.68%, avg=382.40, stdev= 7.16, samples=20 00:31:22.549 iops : min= 88, max= 96, 
avg=95.60, stdev= 1.79, samples=20 00:31:22.549 lat (msec) : 50=100.00% 00:31:22.549 cpu : usr=86.28%, sys=13.47%, ctx=10, majf=0, minf=207 00:31:22.549 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:22.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.549 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.549 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:22.549 00:31:22.549 Run status group 0 (all jobs): 00:31:22.549 READ: bw=383KiB/s (392kB/s), 383KiB/s-383KiB/s (392kB/s-392kB/s), io=3840KiB (3932kB), run=10020-10020msec 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.549 00:31:22.549 real 0m11.101s 00:31:22.549 user 0m17.390s 00:31:22.549 sys 0m1.709s 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:22.549 15:36:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:22.549 ************************************ 00:31:22.549 END TEST fio_dif_1_default 00:31:22.549 ************************************ 00:31:22.549 15:36:24 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:22.550 15:36:24 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:22.550 15:36:24 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:22.550 15:36:24 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 ************************************ 00:31:22.550 START TEST fio_dif_1_multi_subsystems 00:31:22.550 ************************************ 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
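For reference, every fio_dif case drives I/O through SPDK's fio bdev plugin rather than the kernel block layer: the JSON controller-attach config arrives on one file descriptor and the generated job file on another. Stripped of the harness plumbing, the invocation in the preceding trace is roughly (paths abridged):

# LD_PRELOAD points fio at the spdk_bdev ioengine built alongside SPDK:
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
# /dev/fd/62 carries the bdev_nvme_attach_controller JSON shown above;
# /dev/fd/61 carries the fio job (randread, bs=4096, iodepth=4 in the run above).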
00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 bdev_null0 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 [2024-07-15 15:36:24.848402] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 bdev_null1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 15:36:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:22.550 { 00:31:22.550 "params": { 00:31:22.550 "name": "Nvme$subsystem", 00:31:22.550 "trtype": "$TEST_TRANSPORT", 00:31:22.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.550 "adrfam": "ipv4", 00:31:22.550 "trsvcid": "$NVMF_PORT", 00:31:22.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.550 "hdgst": ${hdgst:-false}, 00:31:22.550 "ddgst": ${ddgst:-false} 00:31:22.550 }, 00:31:22.550 "method": "bdev_nvme_attach_controller" 00:31:22.550 } 00:31:22.550 EOF 00:31:22.550 )") 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:22.550 { 00:31:22.550 "params": { 00:31:22.550 "name": "Nvme$subsystem", 00:31:22.550 "trtype": "$TEST_TRANSPORT", 00:31:22.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.550 "adrfam": "ipv4", 00:31:22.550 "trsvcid": "$NVMF_PORT", 00:31:22.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.550 "hdgst": ${hdgst:-false}, 00:31:22.550 "ddgst": ${ddgst:-false} 00:31:22.550 }, 00:31:22.550 "method": "bdev_nvme_attach_controller" 00:31:22.550 } 00:31:22.550 EOF 00:31:22.550 )") 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
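
The config=() array being filled in the trace above is a small bash idiom for assembling the fio JSON: each subsystem appends one bdev_nvme_attach_controller object via a here-doc (so $subsystem and the other placeholders expand per iteration), and the elements are later comma-joined by expanding "${config[*]}" with IFS set to a comma, which is what produces the '{...},{...}' text printed just below. The join step in isolation, as a runnable sketch:

    # each array element is one JSON object; "${config[*]}" expands the array
    # as a single word joined by the first character of IFS, so IFS=, yields
    # a comma-separated list ready to be spliced into a larger JSON document
    config=()
    for subsystem in 0 1; do
        config+=("{\"method\": \"bdev_nvme_attach_controller\", \"params\": {\"name\": \"Nvme$subsystem\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\"}}")
    done
    (IFS=,; printf '%s\n' "${config[*]}")
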
00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:22.550 "params": { 00:31:22.550 "name": "Nvme0", 00:31:22.550 "trtype": "tcp", 00:31:22.550 "traddr": "10.0.0.2", 00:31:22.550 "adrfam": "ipv4", 00:31:22.550 "trsvcid": "4420", 00:31:22.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:22.550 "hdgst": false, 00:31:22.550 "ddgst": false 00:31:22.550 }, 00:31:22.550 "method": "bdev_nvme_attach_controller" 00:31:22.550 },{ 00:31:22.550 "params": { 00:31:22.550 "name": "Nvme1", 00:31:22.550 "trtype": "tcp", 00:31:22.550 "traddr": "10.0.0.2", 00:31:22.550 "adrfam": "ipv4", 00:31:22.550 "trsvcid": "4420", 00:31:22.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.550 "hdgst": false, 00:31:22.550 "ddgst": false 00:31:22.550 }, 00:31:22.550 "method": "bdev_nvme_attach_controller" 00:31:22.550 }' 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:22.550 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:22.551 15:36:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.551 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:22.551 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:22.551 fio-3.35 00:31:22.551 Starting 2 threads 00:31:22.551 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.520 00:31:32.520 filename0: (groupid=0, jobs=1): err= 0: pid=3244637: Mon Jul 15 15:36:36 2024 00:31:32.520 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10029msec) 00:31:32.520 slat (nsec): min=5687, max=29628, avg=7460.51, stdev=2712.25 00:31:32.520 clat (usec): min=40889, max=43063, avg=41416.32, stdev=514.35 00:31:32.520 lat (usec): min=40895, max=43091, avg=41423.79, stdev=514.55 00:31:32.520 clat percentiles (usec): 00:31:32.520 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:32.520 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:31:32.520 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:32.520 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:32.520 | 99.99th=[43254] 
00:31:32.520 bw ( KiB/s): min= 352, max= 416, per=34.46%, avg=385.60, stdev=12.61, samples=20 00:31:32.520 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:31:32.520 lat (msec) : 50=100.00% 00:31:32.520 cpu : usr=93.52%, sys=6.23%, ctx=13, majf=0, minf=141 00:31:32.520 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.520 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.520 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:32.520 filename1: (groupid=0, jobs=1): err= 0: pid=3244638: Mon Jul 15 15:36:36 2024 00:31:32.520 read: IOPS=182, BW=732KiB/s (749kB/s)(7344KiB/10039msec) 00:31:32.520 slat (nsec): min=5699, max=25299, avg=6781.22, stdev=1959.63 00:31:32.520 clat (usec): min=892, max=42913, avg=21850.34, stdev=20426.84 00:31:32.520 lat (usec): min=898, max=42919, avg=21857.12, stdev=20426.24 00:31:32.520 clat percentiles (usec): 00:31:32.520 | 1.00th=[ 898], 5.00th=[ 906], 10.00th=[ 906], 20.00th=[ 914], 00:31:32.520 | 30.00th=[ 922], 40.00th=[ 971], 50.00th=[41157], 60.00th=[41157], 00:31:32.520 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:32.520 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:32.520 | 99.99th=[42730] 00:31:32.520 bw ( KiB/s): min= 512, max= 768, per=65.52%, avg=732.80, stdev=60.45, samples=20 00:31:32.520 iops : min= 128, max= 192, avg=183.20, stdev=15.11, samples=20 00:31:32.520 lat (usec) : 1000=41.99% 00:31:32.520 lat (msec) : 2=6.81%, 50=51.20% 00:31:32.520 cpu : usr=92.99%, sys=6.75%, ctx=12, majf=0, minf=106 00:31:32.520 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:32.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.520 issued rwts: total=1836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.520 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:32.520 00:31:32.520 Run status group 0 (all jobs): 00:31:32.520 READ: bw=1117KiB/s (1144kB/s), 386KiB/s-732KiB/s (395kB/s-749kB/s), io=11.0MiB (11.5MB), run=10029-10039msec 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
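
The ldd | grep | awk triplets that precede every fio launch above are fio_bdev probing whether the spdk_bdev fio plugin was built with AddressSanitizer (GCC's libasan or clang's libclang_rt.asan); if it was, the sanitizer runtime has to appear before the plugin in LD_PRELOAD or ASan refuses to start. Reassembled as a condensed sketch of the same logic:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # third ldd column is the resolved library path; empty if not linked
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # sanitizer runtime first, then the plugin; in this run both probes came
    # back empty, hence the bare LD_PRELOAD=' <plugin>' seen in the trace
    # (/dev/fd/62 and /dev/fd/61 are the caller's process substitutions)
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf /dev/fd/62 /dev/fd/61
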
00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.520 00:31:32.520 real 0m11.514s 00:31:32.520 user 0m27.482s 00:31:32.520 sys 0m1.662s 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:32.520 15:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 ************************************ 00:31:32.520 END TEST fio_dif_1_multi_subsystems 00:31:32.520 ************************************ 00:31:32.520 15:36:36 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:32.520 15:36:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:32.520 15:36:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:32.520 15:36:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:32.520 15:36:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 ************************************ 00:31:32.520 START TEST fio_dif_rand_params 00:31:32.520 ************************************ 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.520 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:32.520 bdev_null0 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:32.780 [2024-07-15 15:36:36.450545] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:32.780 { 00:31:32.780 "params": { 00:31:32.780 "name": "Nvme$subsystem", 00:31:32.780 "trtype": "$TEST_TRANSPORT", 00:31:32.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:32.780 "adrfam": "ipv4", 00:31:32.780 "trsvcid": "$NVMF_PORT", 00:31:32.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:32.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:32.780 "hdgst": ${hdgst:-false}, 00:31:32.780 "ddgst": ${ddgst:-false} 00:31:32.780 }, 00:31:32.780 "method": "bdev_nvme_attach_controller" 00:31:32.780 } 00:31:32.780 EOF 00:31:32.780 )") 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:32.780 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
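
Neither the JSON config nor the fio job file touches disk: the /dev/fd/62 and /dev/fd/61 arguments on every fio command line are process substitutions set up at the call site, with the joined JSON (printed just below) arriving on fd 62 and the gen_fio_conf job sections on fd 61. Roughly, and with the caveat that the job file itself is never echoed to the log, so the [filename0] section here is a hypothetical reconstruction consistent with the banners (NvmeXn1 being the bdev name bdev_nvme_attach_controller derives from the controller name plus the namespace index):

    # sketch: both inputs handed to fio as anonymous pipes
    fio_bdev --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0) \
        <(gen_fio_conf)

    # hypothetical gen_fio_conf output for the single-file case:
    #   [global]
    #   thread=1
    #   [filename0]
    #   filename=Nvme0n1
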
00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:32.781 "params": { 00:31:32.781 "name": "Nvme0", 00:31:32.781 "trtype": "tcp", 00:31:32.781 "traddr": "10.0.0.2", 00:31:32.781 "adrfam": "ipv4", 00:31:32.781 "trsvcid": "4420", 00:31:32.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:32.781 "hdgst": false, 00:31:32.781 "ddgst": false 00:31:32.781 }, 00:31:32.781 "method": "bdev_nvme_attach_controller" 00:31:32.781 }' 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:32.781 15:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.039 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:33.039 ... 
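
fio_dif_rand_params re-runs the same attach-and-read pattern while varying the I/O shape. This first pass pairs a type-3 DIF null bdev (NULL_DIF=3; type 3 checks the guard tag but leaves the reference tag unchecked) with three 128 KiB random-read jobs at queue depth 3 for 5 seconds, matching the fio banner above. On a standalone fio command line, and assuming the attached bdev shows up as Nvme0n1, those knobs would be roughly:

    # sketch of the parameters this pass exercises (values from the trace)
    /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
        --spdk_json_conf=<(gen_nvmf_target_json 0) \
        --filename=Nvme0n1 --thread=1 \
        --rw=randread --bs=128k --numjobs=3 --iodepth=3 \
        --runtime=5 --time_based
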
00:31:33.039 fio-3.35 00:31:33.039 Starting 3 threads 00:31:33.039 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.605 00:31:39.605 filename0: (groupid=0, jobs=1): err= 0: pid=3246645: Mon Jul 15 15:36:42 2024 00:31:39.605 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5001msec) 00:31:39.605 slat (nsec): min=5801, max=62073, avg=10632.50, stdev=5235.35 00:31:39.605 clat (usec): min=4100, max=93194, avg=11376.76, stdev=12662.48 00:31:39.605 lat (usec): min=4107, max=93207, avg=11387.39, stdev=12662.97 00:31:39.605 clat percentiles (usec): 00:31:39.605 | 1.00th=[ 4293], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5866], 00:31:39.605 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7570], 60.00th=[ 8094], 00:31:39.605 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[11207], 95.00th=[50070], 00:31:39.605 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54264], 99.95th=[92799], 00:31:39.605 | 99.99th=[92799] 00:31:39.605 bw ( KiB/s): min=22272, max=41984, per=31.34%, avg=32540.44, stdev=6910.16, samples=9 00:31:39.605 iops : min= 174, max= 328, avg=254.22, stdev=53.99, samples=9 00:31:39.605 lat (msec) : 10=84.05%, 20=6.68%, 50=4.10%, 100=5.16% 00:31:39.605 cpu : usr=94.06%, sys=5.52%, ctx=11, majf=0, minf=143 00:31:39.605 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.605 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.605 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:39.605 filename0: (groupid=0, jobs=1): err= 0: pid=3246647: Mon Jul 15 15:36:42 2024 00:31:39.605 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(181MiB/5023msec) 00:31:39.605 slat (nsec): min=5915, max=73836, avg=12033.92, stdev=6523.03 00:31:39.605 clat (usec): min=3583, max=53216, avg=10411.86, stdev=11469.67 00:31:39.605 lat (usec): min=3589, max=53228, avg=10423.90, stdev=11470.40 00:31:39.605 clat percentiles (usec): 00:31:39.605 | 1.00th=[ 4080], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5800], 00:31:39.605 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7570], 00:31:39.605 | 70.00th=[ 8356], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[49021], 00:31:39.605 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[53216], 00:31:39.605 | 99.99th=[53216] 00:31:39.605 bw ( KiB/s): min=23808, max=60160, per=35.55%, avg=36915.20, stdev=12079.29, samples=10 00:31:39.605 iops : min= 186, max= 470, avg=288.40, stdev=94.37, samples=10 00:31:39.605 lat (msec) : 4=0.69%, 10=86.09%, 20=5.54%, 50=4.36%, 100=3.32% 00:31:39.605 cpu : usr=93.07%, sys=6.51%, ctx=9, majf=0, minf=71 00:31:39.605 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.605 issued rwts: total=1445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.605 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:39.605 filename0: (groupid=0, jobs=1): err= 0: pid=3246648: Mon Jul 15 15:36:42 2024 00:31:39.605 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5012msec) 00:31:39.605 slat (nsec): min=5815, max=54686, avg=10498.70, stdev=5113.90 00:31:39.605 clat (usec): min=3742, max=93028, avg=11433.42, stdev=13434.67 00:31:39.605 lat (usec): min=3748, max=93047, avg=11443.92, stdev=13435.07 00:31:39.605 clat percentiles 
(usec): 00:31:39.605 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5932], 00:31:39.605 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 7963], 00:31:39.605 | 70.00th=[ 8717], 80.00th=[ 9634], 90.00th=[12125], 95.00th=[50070], 00:31:39.605 | 99.00th=[52691], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:31:39.605 | 99.99th=[92799] 00:31:39.605 bw ( KiB/s): min=17664, max=56832, per=32.30%, avg=33543.30, stdev=11660.26, samples=10 00:31:39.605 iops : min= 138, max= 444, avg=262.00, stdev=91.08, samples=10 00:31:39.605 lat (msec) : 4=0.15%, 10=82.86%, 20=7.92%, 50=3.88%, 100=5.18% 00:31:39.605 cpu : usr=94.13%, sys=5.45%, ctx=12, majf=0, minf=179 00:31:39.605 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:39.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.605 issued rwts: total=1313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.605 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:39.605 00:31:39.605 Run status group 0 (all jobs): 00:31:39.605 READ: bw=101MiB/s (106MB/s), 32.7MiB/s-36.0MiB/s (34.3MB/s-37.7MB/s), io=509MiB (534MB), run=5001-5023msec 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:39.605 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 bdev_null0 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 [2024-07-15 15:36:42.703547] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 bdev_null1 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 bdev_null2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.606 { 00:31:39.606 "params": { 00:31:39.606 "name": "Nvme$subsystem", 00:31:39.606 "trtype": "$TEST_TRANSPORT", 00:31:39.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.606 "adrfam": "ipv4", 00:31:39.606 "trsvcid": "$NVMF_PORT", 00:31:39.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.606 "hdgst": ${hdgst:-false}, 00:31:39.606 "ddgst": ${ddgst:-false} 00:31:39.606 }, 00:31:39.606 "method": "bdev_nvme_attach_controller" 00:31:39.606 } 00:31:39.606 EOF 00:31:39.606 )") 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.606 { 00:31:39.606 "params": { 00:31:39.606 "name": "Nvme$subsystem", 00:31:39.606 "trtype": "$TEST_TRANSPORT", 00:31:39.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.606 "adrfam": "ipv4", 00:31:39.606 "trsvcid": "$NVMF_PORT", 00:31:39.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.606 "hdgst": ${hdgst:-false}, 00:31:39.606 "ddgst": ${ddgst:-false} 00:31:39.606 }, 00:31:39.606 "method": "bdev_nvme_attach_controller" 00:31:39.606 } 00:31:39.606 EOF 00:31:39.606 )") 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:39.606 15:36:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:39.606 { 00:31:39.606 "params": { 00:31:39.606 "name": "Nvme$subsystem", 00:31:39.606 "trtype": "$TEST_TRANSPORT", 00:31:39.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:39.606 "adrfam": "ipv4", 00:31:39.606 "trsvcid": "$NVMF_PORT", 00:31:39.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:39.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:39.606 "hdgst": ${hdgst:-false}, 00:31:39.606 "ddgst": ${ddgst:-false} 00:31:39.606 }, 00:31:39.606 "method": "bdev_nvme_attach_controller" 00:31:39.606 } 00:31:39.606 EOF 00:31:39.606 )") 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:39.606 15:36:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:39.606 "params": { 00:31:39.606 "name": "Nvme0", 00:31:39.606 "trtype": "tcp", 00:31:39.606 "traddr": "10.0.0.2", 00:31:39.606 "adrfam": "ipv4", 00:31:39.606 "trsvcid": "4420", 00:31:39.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:39.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:39.606 "hdgst": false, 00:31:39.606 "ddgst": false 00:31:39.606 }, 00:31:39.606 "method": "bdev_nvme_attach_controller" 00:31:39.606 },{ 00:31:39.606 "params": { 00:31:39.606 "name": "Nvme1", 00:31:39.606 "trtype": "tcp", 00:31:39.606 "traddr": "10.0.0.2", 00:31:39.606 "adrfam": "ipv4", 00:31:39.607 "trsvcid": "4420", 00:31:39.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:39.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:39.607 "hdgst": false, 00:31:39.607 "ddgst": false 00:31:39.607 }, 00:31:39.607 "method": "bdev_nvme_attach_controller" 00:31:39.607 },{ 00:31:39.607 "params": { 00:31:39.607 "name": "Nvme2", 00:31:39.607 "trtype": "tcp", 00:31:39.607 "traddr": "10.0.0.2", 00:31:39.607 "adrfam": "ipv4", 00:31:39.607 "trsvcid": "4420", 00:31:39.607 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:39.607 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:39.607 "hdgst": false, 00:31:39.607 "ddgst": false 00:31:39.607 }, 00:31:39.607 "method": "bdev_nvme_attach_controller" 00:31:39.607 }' 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:39.607 
15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:39.607 15:36:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:39.607 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:39.607 ... 00:31:39.607 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:39.607 ... 00:31:39.607 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:39.607 ... 00:31:39.607 fio-3.35 00:31:39.607 Starting 24 threads 00:31:39.607 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.848 00:31:51.848 filename0: (groupid=0, jobs=1): err= 0: pid=3247954: Mon Jul 15 15:36:54 2024 00:31:51.848 read: IOPS=620, BW=2482KiB/s (2542kB/s)(24.3MiB/10019msec) 00:31:51.848 slat (nsec): min=6465, max=46534, avg=12027.73, stdev=4808.08 00:31:51.848 clat (usec): min=6710, max=51585, avg=25687.06, stdev=5315.10 00:31:51.848 lat (usec): min=6718, max=51599, avg=25699.09, stdev=5315.62 00:31:51.848 clat percentiles (usec): 00:31:51.848 | 1.00th=[ 8586], 5.00th=[15270], 10.00th=[20841], 20.00th=[25297], 00:31:51.848 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.848 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27919], 95.00th=[33817], 00:31:51.848 | 99.00th=[45351], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:31:51.848 | 99.99th=[51643] 00:31:51.848 bw ( KiB/s): min= 2376, max= 2768, per=4.27%, avg=2480.40, stdev=88.84, samples=20 00:31:51.848 iops : min= 594, max= 692, avg=620.10, stdev=22.21, samples=20 00:31:51.848 lat (msec) : 10=2.35%, 20=7.33%, 50=90.28%, 100=0.03% 00:31:51.848 cpu : usr=96.80%, sys=2.78%, ctx=17, majf=0, minf=52 00:31:51.848 IO depths : 1=3.9%, 2=8.0%, 4=20.0%, 8=59.2%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:51.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 issued rwts: total=6217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.848 filename0: (groupid=0, jobs=1): err= 0: pid=3247955: Mon Jul 15 15:36:54 2024 00:31:51.848 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.7MiB/10021msec) 00:31:51.848 slat (nsec): min=6527, max=57842, avg=20021.52, stdev=9748.98 00:31:51.848 clat (usec): min=7593, max=65080, avg=26303.85, stdev=3937.18 00:31:51.848 lat (usec): min=7601, max=65098, avg=26323.87, stdev=3937.47 00:31:51.848 clat percentiles (usec): 00:31:51.848 | 1.00th=[12125], 5.00th=[23987], 10.00th=[25035], 20.00th=[25560], 00:31:51.848 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.848 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[29230], 00:31:51.848 | 99.00th=[43779], 99.50th=[45351], 99.90th=[64750], 99.95th=[65274], 00:31:51.848 | 99.99th=[65274] 00:31:51.848 bw ( KiB/s): min= 2176, max= 2560, per=4.16%, avg=2417.60, stdev=73.58, samples=20 00:31:51.848 iops : min= 544, max= 640, avg=604.40, stdev=18.39, samples=20 
00:31:51.848 lat (msec) : 10=0.63%, 20=2.51%, 50=96.57%, 100=0.30% 00:31:51.848 cpu : usr=96.29%, sys=3.26%, ctx=27, majf=0, minf=32 00:31:51.848 IO depths : 1=3.0%, 2=6.2%, 4=17.5%, 8=63.5%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:51.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 complete : 0=0.0%, 4=92.5%, 8=2.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 issued rwts: total=6060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.848 filename0: (groupid=0, jobs=1): err= 0: pid=3247956: Mon Jul 15 15:36:54 2024 00:31:51.848 read: IOPS=613, BW=2453KiB/s (2512kB/s)(24.0MiB/10016msec) 00:31:51.848 slat (nsec): min=6495, max=62319, avg=14581.44, stdev=7511.13 00:31:51.848 clat (usec): min=4304, max=49780, avg=25989.93, stdev=3900.57 00:31:51.848 lat (usec): min=4314, max=49804, avg=26004.51, stdev=3900.98 00:31:51.848 clat percentiles (usec): 00:31:51.848 | 1.00th=[12256], 5.00th=[19006], 10.00th=[24511], 20.00th=[25560], 00:31:51.848 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.848 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27395], 95.00th=[29230], 00:31:51.848 | 99.00th=[42730], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:31:51.848 | 99.99th=[49546] 00:31:51.848 bw ( KiB/s): min= 2352, max= 2640, per=4.22%, avg=2450.65, stdev=73.13, samples=20 00:31:51.848 iops : min= 588, max= 660, avg=612.65, stdev=18.26, samples=20 00:31:51.848 lat (msec) : 10=0.50%, 20=5.11%, 50=94.38% 00:31:51.848 cpu : usr=96.43%, sys=3.17%, ctx=17, majf=0, minf=34 00:31:51.848 IO depths : 1=1.9%, 2=4.6%, 4=17.8%, 8=64.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:51.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 issued rwts: total=6142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.848 filename0: (groupid=0, jobs=1): err= 0: pid=3247957: Mon Jul 15 15:36:54 2024 00:31:51.848 read: IOPS=608, BW=2434KiB/s (2492kB/s)(23.8MiB/10020msec) 00:31:51.848 slat (nsec): min=6519, max=59179, avg=23563.48, stdev=8936.00 00:31:51.848 clat (usec): min=14090, max=56643, avg=26107.39, stdev=1914.67 00:31:51.848 lat (usec): min=14107, max=56672, avg=26130.95, stdev=1914.84 00:31:51.848 clat percentiles (usec): 00:31:51.848 | 1.00th=[19792], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:31:51.848 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.848 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:31:51.848 | 99.00th=[31327], 99.50th=[35390], 99.90th=[48497], 99.95th=[49021], 00:31:51.848 | 99.99th=[56886] 00:31:51.848 bw ( KiB/s): min= 2304, max= 2560, per=4.19%, avg=2432.00, stdev=87.02, samples=20 00:31:51.848 iops : min= 576, max= 640, avg=608.00, stdev=21.75, samples=20 00:31:51.848 lat (msec) : 20=1.07%, 50=98.88%, 100=0.05% 00:31:51.848 cpu : usr=96.25%, sys=3.33%, ctx=32, majf=0, minf=61 00:31:51.848 IO depths : 1=5.6%, 2=11.4%, 4=23.6%, 8=52.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:51.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.848 filename0: (groupid=0, jobs=1): 
err= 0: pid=3247958: Mon Jul 15 15:36:54 2024 00:31:51.848 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10004msec) 00:31:51.848 slat (nsec): min=6570, max=70154, avg=23483.14, stdev=8879.12 00:31:51.848 clat (usec): min=9146, max=57162, avg=26247.82, stdev=1989.25 00:31:51.848 lat (usec): min=9154, max=57183, avg=26271.30, stdev=1988.83 00:31:51.848 clat percentiles (usec): 00:31:51.848 | 1.00th=[23987], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:31:51.848 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.848 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:31:51.848 | 99.00th=[31851], 99.50th=[43254], 99.90th=[49546], 99.95th=[49546], 00:31:51.848 | 99.99th=[57410] 00:31:51.848 bw ( KiB/s): min= 2176, max= 2432, per=4.16%, avg=2418.53, stdev=58.73, samples=19 00:31:51.848 iops : min= 544, max= 608, avg=604.63, stdev=14.68, samples=19 00:31:51.848 lat (msec) : 10=0.03%, 20=0.56%, 50=99.36%, 100=0.05% 00:31:51.848 cpu : usr=96.55%, sys=3.05%, ctx=26, majf=0, minf=69 00:31:51.848 IO depths : 1=5.9%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:51.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.848 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.848 filename0: (groupid=0, jobs=1): err= 0: pid=3247959: Mon Jul 15 15:36:54 2024 00:31:51.848 read: IOPS=635, BW=2542KiB/s (2603kB/s)(24.9MiB/10025msec) 00:31:51.848 slat (nsec): min=3899, max=42597, avg=11442.99, stdev=4518.56 00:31:51.848 clat (usec): min=3960, max=49065, avg=25067.75, stdev=5373.23 00:31:51.848 lat (usec): min=3969, max=49078, avg=25079.19, stdev=5373.79 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[ 7963], 5.00th=[13435], 10.00th=[18482], 20.00th=[24249], 00:31:51.849 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:31:51.849 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[31851], 00:31:51.849 | 99.00th=[43779], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:31:51.849 | 99.99th=[49021] 00:31:51.849 bw ( KiB/s): min= 2304, max= 2880, per=4.38%, avg=2544.80, stdev=170.08, samples=20 00:31:51.849 iops : min= 576, max= 720, avg=636.20, stdev=42.52, samples=20 00:31:51.849 lat (msec) : 4=0.08%, 10=3.09%, 20=8.66%, 50=88.17% 00:31:51.849 cpu : usr=96.37%, sys=3.21%, ctx=25, majf=0, minf=70 00:31:51.849 IO depths : 1=3.3%, 2=6.7%, 4=16.6%, 8=63.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=92.2%, 8=2.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=6372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename0: (groupid=0, jobs=1): err= 0: pid=3247961: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=608, BW=2434KiB/s (2492kB/s)(23.8MiB/10013msec) 00:31:51.849 slat (nsec): min=6580, max=61633, avg=19562.51, stdev=8873.39 00:31:51.849 clat (usec): min=10583, max=49317, avg=26150.44, stdev=1585.69 00:31:51.849 lat (usec): min=10591, max=49330, avg=26170.01, stdev=1585.87 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[21103], 5.00th=[24773], 10.00th=[25297], 20.00th=[25822], 00:31:51.849 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.849 | 
70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[27395], 00:31:51.849 | 99.00th=[30540], 99.50th=[31851], 99.90th=[36963], 99.95th=[49021], 00:31:51.849 | 99.99th=[49546] 00:31:51.849 bw ( KiB/s): min= 2304, max= 2565, per=4.19%, avg=2430.40, stdev=43.28, samples=20 00:31:51.849 iops : min= 576, max= 641, avg=607.55, stdev=10.78, samples=20 00:31:51.849 lat (msec) : 20=0.80%, 50=99.20% 00:31:51.849 cpu : usr=95.96%, sys=3.61%, ctx=39, majf=0, minf=82 00:31:51.849 IO depths : 1=4.7%, 2=9.6%, 4=21.2%, 8=56.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=6092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename0: (groupid=0, jobs=1): err= 0: pid=3247962: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=605, BW=2421KiB/s (2479kB/s)(23.7MiB/10020msec) 00:31:51.849 slat (nsec): min=6735, max=59190, avg=24551.02, stdev=8592.21 00:31:51.849 clat (usec): min=19430, max=65093, avg=26238.17, stdev=2175.16 00:31:51.849 lat (usec): min=19446, max=65108, avg=26262.72, stdev=2174.46 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[23987], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:31:51.849 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.849 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:31:51.849 | 99.00th=[29754], 99.50th=[32113], 99.90th=[65274], 99.95th=[65274], 00:31:51.849 | 99.99th=[65274] 00:31:51.849 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2419.20, stdev=86.02, samples=20 00:31:51.849 iops : min= 544, max= 640, avg=604.80, stdev=21.51, samples=20 00:31:51.849 lat (msec) : 20=0.26%, 50=99.47%, 100=0.26% 00:31:51.849 cpu : usr=96.32%, sys=3.27%, ctx=25, majf=0, minf=66 00:31:51.849 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename1: (groupid=0, jobs=1): err= 0: pid=3247963: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=596, BW=2387KiB/s (2444kB/s)(23.3MiB/10004msec) 00:31:51.849 slat (nsec): min=5805, max=60223, avg=20215.35, stdev=9773.66 00:31:51.849 clat (usec): min=6246, max=49345, avg=26652.30, stdev=4574.21 00:31:51.849 lat (usec): min=6254, max=49359, avg=26672.51, stdev=4573.75 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[ 9634], 5.00th=[23987], 10.00th=[25297], 20.00th=[25560], 00:31:51.849 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.849 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27919], 95.00th=[35914], 00:31:51.849 | 99.00th=[43779], 99.50th=[46400], 99.90th=[47973], 99.95th=[49546], 00:31:51.849 | 99.99th=[49546] 00:31:51.849 bw ( KiB/s): min= 2048, max= 2512, per=4.08%, avg=2371.79, stdev=107.97, samples=19 00:31:51.849 iops : min= 512, max= 628, avg=592.95, stdev=26.99, samples=19 00:31:51.849 lat (msec) : 10=1.37%, 20=2.46%, 50=96.16% 00:31:51.849 cpu : usr=96.83%, sys=2.77%, ctx=17, majf=0, minf=60 00:31:51.849 IO depths : 1=3.4%, 2=6.8%, 4=17.3%, 8=62.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=92.5%, 8=3.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=5969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename1: (groupid=0, jobs=1): err= 0: pid=3247964: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=607, BW=2432KiB/s (2490kB/s)(23.8MiB/10015msec) 00:31:51.849 slat (nsec): min=6451, max=55643, avg=15085.83, stdev=7579.29 00:31:51.849 clat (usec): min=6256, max=50568, avg=26210.12, stdev=3557.14 00:31:51.849 lat (usec): min=6273, max=50583, avg=26225.20, stdev=3557.52 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[13698], 5.00th=[21365], 10.00th=[24511], 20.00th=[25560], 00:31:51.849 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:31:51.849 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27919], 95.00th=[31589], 00:31:51.849 | 99.00th=[40109], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:31:51.849 | 99.99th=[50594] 00:31:51.849 bw ( KiB/s): min= 2304, max= 2640, per=4.18%, avg=2428.80, stdev=69.18, samples=20 00:31:51.849 iops : min= 576, max= 660, avg=607.20, stdev=17.29, samples=20 00:31:51.849 lat (msec) : 10=0.51%, 20=3.38%, 50=96.07%, 100=0.03% 00:31:51.849 cpu : usr=96.74%, sys=2.84%, ctx=15, majf=0, minf=69 00:31:51.849 IO depths : 1=3.2%, 2=6.8%, 4=18.8%, 8=61.4%, 16=9.8%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=92.9%, 8=1.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=6088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename1: (groupid=0, jobs=1): err= 0: pid=3247965: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=609, BW=2438KiB/s (2497kB/s)(23.8MiB/10013msec) 00:31:51.849 slat (nsec): min=6430, max=59867, avg=19395.00, stdev=8580.37 00:31:51.849 clat (usec): min=9265, max=43880, avg=26103.78, stdev=2184.04 00:31:51.849 lat (usec): min=9278, max=43887, avg=26123.18, stdev=2184.38 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[16188], 5.00th=[24773], 10.00th=[25297], 20.00th=[25560], 00:31:51.849 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.849 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[27657], 00:31:51.849 | 99.00th=[34341], 99.50th=[34866], 99.90th=[41681], 99.95th=[42730], 00:31:51.849 | 99.99th=[43779] 00:31:51.849 bw ( KiB/s): min= 2427, max= 2488, per=4.19%, avg=2434.55, stdev=12.63, samples=20 00:31:51.849 iops : min= 606, max= 622, avg=608.60, stdev= 3.19, samples=20 00:31:51.849 lat (msec) : 10=0.11%, 20=1.88%, 50=98.00% 00:31:51.849 cpu : usr=96.39%, sys=3.21%, ctx=27, majf=0, minf=57 00:31:51.849 IO depths : 1=4.7%, 2=9.7%, 4=22.6%, 8=55.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=6103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename1: (groupid=0, jobs=1): err= 0: pid=3247966: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=583, BW=2334KiB/s (2390kB/s)(22.8MiB/10002msec) 00:31:51.849 slat (nsec): min=6456, max=59537, avg=15658.51, stdev=9751.34 00:31:51.849 clat (usec): min=3262, max=80253, 
avg=27325.97, stdev=5606.23 00:31:51.849 lat (usec): min=3269, max=80271, avg=27341.62, stdev=5605.98 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[ 8848], 5.00th=[21627], 10.00th=[25035], 20.00th=[25560], 00:31:51.849 | 30.00th=[26084], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:31:51.849 | 70.00th=[26870], 80.00th=[27919], 90.00th=[33424], 95.00th=[38011], 00:31:51.849 | 99.00th=[44827], 99.50th=[46400], 99.90th=[63177], 99.95th=[63177], 00:31:51.849 | 99.99th=[80217] 00:31:51.849 bw ( KiB/s): min= 2100, max= 2432, per=4.01%, avg=2329.89, stdev=85.17, samples=19 00:31:51.849 iops : min= 525, max= 608, avg=582.47, stdev=21.29, samples=19 00:31:51.849 lat (msec) : 4=0.03%, 10=1.49%, 20=2.98%, 50=95.22%, 100=0.27% 00:31:51.849 cpu : usr=96.30%, sys=3.24%, ctx=23, majf=0, minf=91 00:31:51.849 IO depths : 1=0.4%, 2=1.0%, 4=10.2%, 8=74.5%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=91.2%, 8=4.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=5837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename1: (groupid=0, jobs=1): err= 0: pid=3247967: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=597, BW=2391KiB/s (2448kB/s)(23.4MiB/10002msec) 00:31:51.849 slat (nsec): min=6452, max=62892, avg=18943.99, stdev=10230.11 00:31:51.849 clat (usec): min=5655, max=54606, avg=26640.69, stdev=4056.34 00:31:51.849 lat (usec): min=5669, max=54626, avg=26659.64, stdev=4055.78 00:31:51.849 clat percentiles (usec): 00:31:51.849 | 1.00th=[13829], 5.00th=[23725], 10.00th=[25035], 20.00th=[25560], 00:31:51.849 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:31:51.849 | 70.00th=[26608], 80.00th=[26870], 90.00th=[28443], 95.00th=[32637], 00:31:51.849 | 99.00th=[43779], 99.50th=[46400], 99.90th=[54789], 99.95th=[54789], 00:31:51.849 | 99.99th=[54789] 00:31:51.849 bw ( KiB/s): min= 2176, max= 2432, per=4.10%, avg=2382.32, stdev=67.44, samples=19 00:31:51.849 iops : min= 544, max= 608, avg=595.58, stdev=16.86, samples=19 00:31:51.849 lat (msec) : 10=0.64%, 20=1.87%, 50=97.16%, 100=0.33% 00:31:51.849 cpu : usr=96.24%, sys=3.30%, ctx=22, majf=0, minf=48 00:31:51.849 IO depths : 1=1.7%, 2=3.7%, 4=13.5%, 8=69.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:51.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.849 issued rwts: total=5978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.849 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.849 filename1: (groupid=0, jobs=1): err= 0: pid=3247968: Mon Jul 15 15:36:54 2024 00:31:51.849 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.3MiB/10003msec) 00:31:51.849 slat (nsec): min=6151, max=59190, avg=20242.71, stdev=10209.84 00:31:51.849 clat (usec): min=7536, max=55396, avg=26626.25, stdev=3719.41 00:31:51.850 lat (usec): min=7555, max=55412, avg=26646.49, stdev=3718.66 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[15926], 5.00th=[24249], 10.00th=[25297], 20.00th=[25560], 00:31:51.850 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.850 | 70.00th=[26608], 80.00th=[26870], 90.00th=[27657], 95.00th=[32113], 00:31:51.850 | 99.00th=[44303], 99.50th=[45876], 99.90th=[49546], 99.95th=[55313], 00:31:51.850 | 99.99th=[55313] 00:31:51.850 bw ( KiB/s): min= 2176, max= 2480, 
per=4.10%, avg=2381.89, stdev=73.85, samples=19 00:31:51.850 iops : min= 544, max= 620, avg=595.47, stdev=18.46, samples=19 00:31:51.850 lat (msec) : 10=0.47%, 20=2.06%, 50=97.39%, 100=0.08% 00:31:51.850 cpu : usr=96.51%, sys=3.07%, ctx=15, majf=0, minf=59 00:31:51.850 IO depths : 1=2.9%, 2=5.8%, 4=14.5%, 8=65.2%, 16=11.6%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=91.8%, 8=4.4%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=5977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename1: (groupid=0, jobs=1): err= 0: pid=3247969: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.7MiB/10008msec) 00:31:51.850 slat (nsec): min=6664, max=55861, avg=23473.47, stdev=8679.94 00:31:51.850 clat (usec): min=7191, max=51804, avg=26192.28, stdev=1857.77 00:31:51.850 lat (usec): min=7200, max=51825, avg=26215.76, stdev=1857.49 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[22676], 5.00th=[25297], 10.00th=[25560], 20.00th=[25560], 00:31:51.850 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.850 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:31:51.850 | 99.00th=[28181], 99.50th=[33424], 99.90th=[51643], 99.95th=[51643], 00:31:51.850 | 99.99th=[51643] 00:31:51.850 bw ( KiB/s): min= 2304, max= 2480, per=4.17%, avg=2421.05, stdev=42.68, samples=19 00:31:51.850 iops : min= 576, max= 620, avg=605.26, stdev=10.67, samples=19 00:31:51.850 lat (msec) : 10=0.07%, 20=0.36%, 50=99.31%, 100=0.26% 00:31:51.850 cpu : usr=96.63%, sys=2.98%, ctx=13, majf=0, minf=53 00:31:51.850 IO depths : 1=6.0%, 2=11.9%, 4=24.2%, 8=51.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename1: (groupid=0, jobs=1): err= 0: pid=3247970: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=606, BW=2427KiB/s (2485kB/s)(23.8MiB/10020msec) 00:31:51.850 slat (nsec): min=6755, max=60249, avg=25584.23, stdev=8487.43 00:31:51.850 clat (usec): min=14062, max=48939, avg=26144.59, stdev=1480.85 00:31:51.850 lat (usec): min=14070, max=48955, avg=26170.18, stdev=1480.55 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[23987], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:31:51.850 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.850 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:31:51.850 | 99.00th=[27657], 99.50th=[27919], 99.90th=[49021], 99.95th=[49021], 00:31:51.850 | 99.99th=[49021] 00:31:51.850 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2425.60, stdev=77.42, samples=20 00:31:51.850 iops : min= 576, max= 640, avg=606.40, stdev=19.35, samples=20 00:31:51.850 lat (msec) : 20=0.46%, 50=99.54% 00:31:51.850 cpu : usr=96.74%, sys=2.84%, ctx=19, majf=0, minf=38 00:31:51.850 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=6080,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename2: (groupid=0, jobs=1): err= 0: pid=3247971: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=591, BW=2365KiB/s (2422kB/s)(23.1MiB/10010msec) 00:31:51.850 slat (nsec): min=6487, max=57840, avg=21445.54, stdev=10272.18 00:31:51.850 clat (usec): min=8891, max=54430, avg=26892.83, stdev=3973.86 00:31:51.850 lat (usec): min=8900, max=54450, avg=26914.28, stdev=3972.68 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[18482], 5.00th=[24511], 10.00th=[25297], 20.00th=[25822], 00:31:51.850 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:31:51.850 | 70.00th=[26608], 80.00th=[26870], 90.00th=[28967], 95.00th=[32637], 00:31:51.850 | 99.00th=[44303], 99.50th=[48497], 99.90th=[54264], 99.95th=[54264], 00:31:51.850 | 99.99th=[54264] 00:31:51.850 bw ( KiB/s): min= 2048, max= 2432, per=4.06%, avg=2357.05, stdev=111.75, samples=19 00:31:51.850 iops : min= 512, max= 608, avg=589.26, stdev=27.94, samples=19 00:31:51.850 lat (msec) : 10=0.10%, 20=1.57%, 50=97.99%, 100=0.34% 00:31:51.850 cpu : usr=96.70%, sys=2.91%, ctx=18, majf=0, minf=44 00:31:51.850 IO depths : 1=3.6%, 2=7.1%, 4=17.2%, 8=62.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=5918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename2: (groupid=0, jobs=1): err= 0: pid=3247972: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=586, BW=2347KiB/s (2403kB/s)(22.9MiB/10002msec) 00:31:51.850 slat (nsec): min=6455, max=56851, avg=16639.45, stdev=9696.79 00:31:51.850 clat (usec): min=5335, max=62692, avg=27183.16, stdev=4558.55 00:31:51.850 lat (usec): min=5342, max=62710, avg=27199.80, stdev=4557.97 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[10814], 5.00th=[24773], 10.00th=[25297], 20.00th=[25822], 00:31:51.850 | 30.00th=[26084], 40.00th=[26346], 50.00th=[26346], 60.00th=[26608], 00:31:51.850 | 70.00th=[26608], 80.00th=[27132], 90.00th=[30802], 95.00th=[36439], 00:31:51.850 | 99.00th=[45351], 99.50th=[46400], 99.90th=[54789], 99.95th=[62653], 00:31:51.850 | 99.99th=[62653] 00:31:51.850 bw ( KiB/s): min= 2052, max= 2480, per=4.02%, avg=2332.42, stdev=115.98, samples=19 00:31:51.850 iops : min= 513, max= 620, avg=583.11, stdev=29.00, samples=19 00:31:51.850 lat (msec) : 10=0.87%, 20=1.09%, 50=97.77%, 100=0.27% 00:31:51.850 cpu : usr=96.36%, sys=3.22%, ctx=15, majf=0, minf=90 00:31:51.850 IO depths : 1=0.5%, 2=1.1%, 4=6.4%, 8=76.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=90.5%, 8=7.3%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=5869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename2: (groupid=0, jobs=1): err= 0: pid=3247974: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=591, BW=2365KiB/s (2422kB/s)(23.1MiB/10008msec) 00:31:51.850 slat (nsec): min=5919, max=56005, avg=17373.56, stdev=9330.18 00:31:51.850 clat (usec): min=4868, max=60693, avg=26953.57, stdev=5262.87 00:31:51.850 lat (usec): min=4879, max=60721, avg=26970.95, stdev=5262.25 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[11338], 5.00th=[20841], 
10.00th=[24511], 20.00th=[25560], 00:31:51.850 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26346], 60.00th=[26346], 00:31:51.850 | 70.00th=[26608], 80.00th=[27132], 90.00th=[31589], 95.00th=[36963], 00:31:51.850 | 99.00th=[47449], 99.50th=[48497], 99.90th=[60556], 99.95th=[60556], 00:31:51.850 | 99.99th=[60556] 00:31:51.850 bw ( KiB/s): min= 2176, max= 2504, per=4.05%, avg=2352.84, stdev=81.75, samples=19 00:31:51.850 iops : min= 544, max= 626, avg=588.21, stdev=20.44, samples=19 00:31:51.850 lat (msec) : 10=0.93%, 20=3.30%, 50=95.40%, 100=0.37% 00:31:51.850 cpu : usr=96.04%, sys=3.54%, ctx=33, majf=0, minf=46 00:31:51.850 IO depths : 1=0.8%, 2=1.9%, 4=9.8%, 8=74.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=5918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename2: (groupid=0, jobs=1): err= 0: pid=3247975: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=600, BW=2402KiB/s (2460kB/s)(23.5MiB/10004msec) 00:31:51.850 slat (nsec): min=6131, max=59124, avg=20366.26, stdev=9960.12 00:31:51.850 clat (usec): min=6302, max=56299, avg=26469.78, stdev=3714.11 00:31:51.850 lat (usec): min=6315, max=56317, avg=26490.15, stdev=3713.61 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[13042], 5.00th=[24511], 10.00th=[25297], 20.00th=[25560], 00:31:51.850 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.850 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[30802], 00:31:51.850 | 99.00th=[43779], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:31:51.850 | 99.99th=[56361] 00:31:51.850 bw ( KiB/s): min= 2180, max= 2464, per=4.11%, avg=2388.42, stdev=72.26, samples=19 00:31:51.850 iops : min= 545, max= 616, avg=597.11, stdev=18.06, samples=19 00:31:51.850 lat (msec) : 10=0.63%, 20=1.91%, 50=97.40%, 100=0.05% 00:31:51.850 cpu : usr=96.99%, sys=2.61%, ctx=16, majf=0, minf=55 00:31:51.850 IO depths : 1=3.8%, 2=7.7%, 4=17.3%, 8=61.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=92.4%, 8=3.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=6008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename2: (groupid=0, jobs=1): err= 0: pid=3247976: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=606, BW=2427KiB/s (2486kB/s)(23.8MiB/10019msec) 00:31:51.850 slat (nsec): min=6844, max=66710, avg=23342.65, stdev=9139.60 00:31:51.850 clat (usec): min=13217, max=48704, avg=26180.28, stdev=1878.51 00:31:51.850 lat (usec): min=13231, max=48739, avg=26203.63, stdev=1877.81 00:31:51.850 clat percentiles (usec): 00:31:51.850 | 1.00th=[21365], 5.00th=[25035], 10.00th=[25560], 20.00th=[25560], 00:31:51.850 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.850 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:31:51.850 | 99.00th=[30802], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:31:51.850 | 99.99th=[48497] 00:31:51.850 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2425.60, stdev=77.42, samples=20 00:31:51.850 iops : min= 576, max= 640, avg=606.40, stdev=19.35, samples=20 00:31:51.850 lat (msec) : 20=0.71%, 50=99.29% 00:31:51.850 cpu : usr=96.41%, 
sys=3.19%, ctx=23, majf=0, minf=41 00:31:51.850 IO depths : 1=6.0%, 2=11.9%, 4=24.4%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:51.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.850 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.850 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.850 filename2: (groupid=0, jobs=1): err= 0: pid=3247977: Mon Jul 15 15:36:54 2024 00:31:51.850 read: IOPS=602, BW=2409KiB/s (2467kB/s)(23.5MiB/10002msec) 00:31:51.850 slat (nsec): min=6459, max=56569, avg=18442.25, stdev=9427.21 00:31:51.850 clat (usec): min=5579, max=68353, avg=26402.76, stdev=3243.04 00:31:51.850 lat (usec): min=5591, max=68371, avg=26421.20, stdev=3242.61 00:31:51.850 clat percentiles (usec): 00:31:51.851 | 1.00th=[16581], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:31:51.851 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.851 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[27657], 00:31:51.851 | 99.00th=[44827], 99.50th=[45876], 99.90th=[54264], 99.95th=[54264], 00:31:51.851 | 99.99th=[68682] 00:31:51.851 bw ( KiB/s): min= 2180, max= 2560, per=4.15%, avg=2408.63, stdev=81.54, samples=19 00:31:51.851 iops : min= 545, max= 640, avg=602.16, stdev=20.39, samples=19 00:31:51.851 lat (msec) : 10=0.18%, 20=1.49%, 50=98.06%, 100=0.27% 00:31:51.851 cpu : usr=96.89%, sys=2.71%, ctx=12, majf=0, minf=64 00:31:51.851 IO depths : 1=4.4%, 2=9.5%, 4=21.6%, 8=56.0%, 16=8.5%, 32=0.0%, >=64=0.0% 00:31:51.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.851 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.851 issued rwts: total=6024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.851 filename2: (groupid=0, jobs=1): err= 0: pid=3247978: Mon Jul 15 15:36:54 2024 00:31:51.851 read: IOPS=605, BW=2421KiB/s (2479kB/s)(23.7MiB/10020msec) 00:31:51.851 slat (nsec): min=6583, max=63594, avg=24611.88, stdev=9101.95 00:31:51.851 clat (usec): min=16038, max=56909, avg=26229.64, stdev=2107.61 00:31:51.851 lat (usec): min=16045, max=56922, avg=26254.26, stdev=2107.17 00:31:51.851 clat percentiles (usec): 00:31:51.851 | 1.00th=[22414], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:31:51.851 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26346], 00:31:51.851 | 70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27132], 00:31:51.851 | 99.00th=[33817], 99.50th=[38536], 99.90th=[56886], 99.95th=[56886], 00:31:51.851 | 99.99th=[56886] 00:31:51.851 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2419.20, stdev=91.93, samples=20 00:31:51.851 iops : min= 544, max= 640, avg=604.80, stdev=22.98, samples=20 00:31:51.851 lat (msec) : 20=0.63%, 50=99.11%, 100=0.26% 00:31:51.851 cpu : usr=96.80%, sys=2.78%, ctx=14, majf=0, minf=38 00:31:51.851 IO depths : 1=5.6%, 2=11.2%, 4=23.3%, 8=52.9%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:51.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.851 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.851 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.851 filename2: (groupid=0, jobs=1): err= 0: pid=3247979: Mon Jul 15 15:36:54 2024 00:31:51.851 read: IOPS=642, BW=2568KiB/s 
(2630kB/s)(25.1MiB/10020msec) 00:31:51.851 slat (nsec): min=3092, max=52815, avg=12416.75, stdev=5479.33 00:31:51.851 clat (usec): min=5660, max=50398, avg=24810.65, stdev=4878.18 00:31:51.851 lat (usec): min=5669, max=50409, avg=24823.07, stdev=4879.26 00:31:51.851 clat percentiles (usec): 00:31:51.851 | 1.00th=[ 7635], 5.00th=[14615], 10.00th=[18220], 20.00th=[23462], 00:31:51.851 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:31:51.851 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[28967], 00:31:51.851 | 99.00th=[39060], 99.50th=[43779], 99.90th=[49546], 99.95th=[50594], 00:31:51.851 | 99.99th=[50594] 00:31:51.851 bw ( KiB/s): min= 2336, max= 2832, per=4.43%, avg=2569.85, stdev=127.92, samples=20 00:31:51.851 iops : min= 584, max= 708, avg=642.45, stdev=31.98, samples=20 00:31:51.851 lat (msec) : 10=2.36%, 20=9.34%, 50=88.20%, 100=0.09% 00:31:51.851 cpu : usr=96.41%, sys=3.19%, ctx=28, majf=0, minf=54 00:31:51.851 IO depths : 1=3.2%, 2=6.5%, 4=16.2%, 8=64.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:31:51.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.851 complete : 0=0.0%, 4=91.9%, 8=3.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.851 issued rwts: total=6434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.851 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:51.851 00:31:51.851 Run status group 0 (all jobs): 00:31:51.851 READ: bw=56.7MiB/s (59.4MB/s), 2334KiB/s-2568KiB/s (2390kB/s-2630kB/s), io=568MiB (596MB), run=10002-10025msec 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 bdev_null0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 [2024-07-15 15:36:54.400540] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 bdev_null1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:51.851 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:51.852 15:36:54 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:51.852 { 00:31:51.852 "params": { 00:31:51.852 "name": "Nvme$subsystem", 00:31:51.852 "trtype": "$TEST_TRANSPORT", 00:31:51.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:51.852 "adrfam": "ipv4", 00:31:51.852 "trsvcid": "$NVMF_PORT", 00:31:51.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:51.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:51.852 "hdgst": ${hdgst:-false}, 00:31:51.852 "ddgst": ${ddgst:-false} 00:31:51.852 }, 00:31:51.852 "method": "bdev_nvme_attach_controller" 00:31:51.852 } 00:31:51.852 EOF 00:31:51.852 )") 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:51.852 { 00:31:51.852 "params": { 00:31:51.852 "name": "Nvme$subsystem", 00:31:51.852 "trtype": "$TEST_TRANSPORT", 00:31:51.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:51.852 "adrfam": "ipv4", 00:31:51.852 "trsvcid": "$NVMF_PORT", 00:31:51.852 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:51.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:51.852 "hdgst": ${hdgst:-false}, 00:31:51.852 "ddgst": ${ddgst:-false} 00:31:51.852 }, 00:31:51.852 "method": "bdev_nvme_attach_controller" 00:31:51.852 } 00:31:51.852 EOF 00:31:51.852 )") 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:51.852 "params": { 00:31:51.852 "name": "Nvme0", 00:31:51.852 "trtype": "tcp", 00:31:51.852 "traddr": "10.0.0.2", 00:31:51.852 "adrfam": "ipv4", 00:31:51.852 "trsvcid": "4420", 00:31:51.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:51.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:51.852 "hdgst": false, 00:31:51.852 "ddgst": false 00:31:51.852 }, 00:31:51.852 "method": "bdev_nvme_attach_controller" 00:31:51.852 },{ 00:31:51.852 "params": { 00:31:51.852 "name": "Nvme1", 00:31:51.852 "trtype": "tcp", 00:31:51.852 "traddr": "10.0.0.2", 00:31:51.852 "adrfam": "ipv4", 00:31:51.852 "trsvcid": "4420", 00:31:51.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:51.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:51.852 "hdgst": false, 00:31:51.852 "ddgst": false 00:31:51.852 }, 00:31:51.852 "method": "bdev_nvme_attach_controller" 00:31:51.852 }' 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:51.852 15:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:51.852 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:51.852 ... 00:31:51.852 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:51.852 ... 
00:31:51.852 fio-3.35 00:31:51.852 Starting 4 threads 00:31:51.852 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.111 00:31:57.111 filename0: (groupid=0, jobs=1): err= 0: pid=3250002: Mon Jul 15 15:37:00 2024 00:31:57.111 read: IOPS=2930, BW=22.9MiB/s (24.0MB/s)(115MiB/5003msec) 00:31:57.111 slat (nsec): min=5831, max=40955, avg=8384.58, stdev=2830.40 00:31:57.111 clat (usec): min=1391, max=4641, avg=2707.65, stdev=472.73 00:31:57.111 lat (usec): min=1402, max=4647, avg=2716.03, stdev=472.63 00:31:57.111 clat percentiles (usec): 00:31:57.111 | 1.00th=[ 1827], 5.00th=[ 2008], 10.00th=[ 2114], 20.00th=[ 2278], 00:31:57.111 | 30.00th=[ 2409], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2802], 00:31:57.111 | 70.00th=[ 2933], 80.00th=[ 3097], 90.00th=[ 3359], 95.00th=[ 3556], 00:31:57.111 | 99.00th=[ 3916], 99.50th=[ 4113], 99.90th=[ 4359], 99.95th=[ 4555], 00:31:57.111 | 99.99th=[ 4621] 00:31:57.111 bw ( KiB/s): min=23104, max=23776, per=27.10%, avg=23443.20, stdev=199.95, samples=10 00:31:57.111 iops : min= 2888, max= 2972, avg=2930.40, stdev=24.99, samples=10 00:31:57.111 lat (msec) : 2=4.67%, 4=94.64%, 10=0.69% 00:31:57.111 cpu : usr=92.60%, sys=7.06%, ctx=11, majf=0, minf=26 00:31:57.111 IO depths : 1=0.2%, 2=1.7%, 4=67.1%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.111 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.111 issued rwts: total=14660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.111 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:57.111 filename0: (groupid=0, jobs=1): err= 0: pid=3250003: Mon Jul 15 15:37:00 2024 00:31:57.111 read: IOPS=2329, BW=18.2MiB/s (19.1MB/s)(91.0MiB/5002msec) 00:31:57.111 slat (nsec): min=5800, max=28165, avg=8270.74, stdev=2704.85 00:31:57.111 clat (usec): min=1639, max=8161, avg=3409.73, stdev=699.83 00:31:57.111 lat (usec): min=1645, max=8188, avg=3418.00, stdev=699.87 00:31:57.111 clat percentiles (usec): 00:31:57.111 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2835], 00:31:57.111 | 30.00th=[ 2966], 40.00th=[ 3163], 50.00th=[ 3326], 60.00th=[ 3490], 00:31:57.111 | 70.00th=[ 3687], 80.00th=[ 3949], 90.00th=[ 4293], 95.00th=[ 4621], 00:31:57.111 | 99.00th=[ 5538], 99.50th=[ 5932], 99.90th=[ 6718], 99.95th=[ 6783], 00:31:57.111 | 99.99th=[ 8094] 00:31:57.112 bw ( KiB/s): min=18288, max=19152, per=21.54%, avg=18635.60, stdev=292.77, samples=10 00:31:57.112 iops : min= 2286, max= 2394, avg=2329.40, stdev=36.65, samples=10 00:31:57.112 lat (msec) : 2=0.39%, 4=81.54%, 10=18.07% 00:31:57.112 cpu : usr=93.54%, sys=6.14%, ctx=6, majf=0, minf=35 00:31:57.112 IO depths : 1=0.3%, 2=1.5%, 4=71.6%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.112 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.112 issued rwts: total=11651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:57.112 filename1: (groupid=0, jobs=1): err= 0: pid=3250004: Mon Jul 15 15:37:00 2024 00:31:57.112 read: IOPS=2677, BW=20.9MiB/s (21.9MB/s)(105MiB/5001msec) 00:31:57.112 slat (nsec): min=5843, max=78898, avg=8396.00, stdev=2803.22 00:31:57.112 clat (usec): min=1429, max=47600, avg=2966.63, stdev=1188.09 00:31:57.112 lat (usec): min=1435, max=47626, avg=2975.02, stdev=1188.16 00:31:57.112 clat percentiles (usec): 00:31:57.112 | 1.00th=[ 1991], 5.00th=[ 
2212], 10.00th=[ 2343], 20.00th=[ 2507], 00:31:57.112 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 3032], 00:31:57.112 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3556], 95.00th=[ 3752], 00:31:57.112 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4883], 99.95th=[47449], 00:31:57.112 | 99.99th=[47449] 00:31:57.112 bw ( KiB/s): min=19575, max=21856, per=24.75%, avg=21411.90, stdev=658.85, samples=10 00:31:57.112 iops : min= 2446, max= 2732, avg=2676.40, stdev=82.63, samples=10 00:31:57.112 lat (msec) : 2=1.02%, 4=97.13%, 10=1.79%, 50=0.06% 00:31:57.112 cpu : usr=93.60%, sys=6.08%, ctx=9, majf=0, minf=64 00:31:57.112 IO depths : 1=0.2%, 2=1.7%, 4=66.5%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.112 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.112 issued rwts: total=13388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:57.112 filename1: (groupid=0, jobs=1): err= 0: pid=3250005: Mon Jul 15 15:37:00 2024 00:31:57.112 read: IOPS=2880, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:31:57.112 slat (nsec): min=5854, max=58858, avg=8367.15, stdev=2785.52 00:31:57.112 clat (usec): min=1253, max=5479, avg=2754.83, stdev=467.34 00:31:57.112 lat (usec): min=1259, max=5485, avg=2763.19, stdev=467.27 00:31:57.112 clat percentiles (usec): 00:31:57.112 | 1.00th=[ 1844], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2343], 00:31:57.112 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2868], 00:31:57.112 | 70.00th=[ 2966], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3589], 00:31:57.112 | 99.00th=[ 3949], 99.50th=[ 4080], 99.90th=[ 4293], 99.95th=[ 4424], 00:31:57.112 | 99.99th=[ 5473] 00:31:57.112 bw ( KiB/s): min=22320, max=23504, per=26.63%, avg=23044.80, stdev=332.56, samples=10 00:31:57.112 iops : min= 2790, max= 2938, avg=2880.60, stdev=41.57, samples=10 00:31:57.112 lat (msec) : 2=3.92%, 4=95.25%, 10=0.83% 00:31:57.112 cpu : usr=92.86%, sys=6.80%, ctx=7, majf=0, minf=37 00:31:57.112 IO depths : 1=0.3%, 2=1.7%, 4=66.9%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.112 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.112 issued rwts: total=14409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.112 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:57.112 00:31:57.112 Run status group 0 (all jobs): 00:31:57.112 READ: bw=84.5MiB/s (88.6MB/s), 18.2MiB/s-22.9MiB/s (19.1MB/s-24.0MB/s), io=423MiB (443MB), run=5001-5003msec 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 00:31:57.112 real 0m24.455s 00:31:57.112 user 4m53.205s 00:31:57.112 sys 0m10.782s 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 ************************************ 00:31:57.112 END TEST fio_dif_rand_params 00:31:57.112 ************************************ 00:31:57.112 15:37:00 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:57.112 15:37:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:57.112 15:37:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:57.112 15:37:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 ************************************ 00:31:57.112 START TEST fio_dif_digest 00:31:57.112 ************************************ 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 bdev_null0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:57.112 [2024-07-15 15:37:00.989004] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:57.112 15:37:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:57.112 { 00:31:57.112 "params": { 00:31:57.112 "name": "Nvme$subsystem", 00:31:57.112 "trtype": "$TEST_TRANSPORT", 00:31:57.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.112 "adrfam": "ipv4", 00:31:57.112 "trsvcid": "$NVMF_PORT", 00:31:57.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.112 "hdgst": ${hdgst:-false}, 
00:31:57.112 "ddgst": ${ddgst:-false} 00:31:57.112 }, 00:31:57.112 "method": "bdev_nvme_attach_controller" 00:31:57.112 } 00:31:57.112 EOF 00:31:57.112 )") 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.113 15:37:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:57.113 15:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:57.113 "params": { 00:31:57.113 "name": "Nvme0", 00:31:57.113 "trtype": "tcp", 00:31:57.113 "traddr": "10.0.0.2", 00:31:57.113 "adrfam": "ipv4", 00:31:57.113 "trsvcid": "4420", 00:31:57.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:57.113 "hdgst": true, 00:31:57.113 "ddgst": true 00:31:57.113 }, 00:31:57.113 "method": "bdev_nvme_attach_controller" 00:31:57.113 }' 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:57.370 15:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.627 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:57.627 ... 
00:31:57.627 fio-3.35 00:31:57.627 Starting 3 threads 00:31:57.627 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.824 00:32:09.824 filename0: (groupid=0, jobs=1): err= 0: pid=3251033: Mon Jul 15 15:37:11 2024 00:32:09.824 read: IOPS=300, BW=37.5MiB/s (39.4MB/s)(377MiB/10048msec) 00:32:09.824 slat (nsec): min=3891, max=44661, avg=10610.57, stdev=2200.48 00:32:09.824 clat (usec): min=4753, max=93574, avg=9950.96, stdev=4855.50 00:32:09.824 lat (usec): min=4763, max=93586, avg=9961.57, stdev=4855.67 00:32:09.824 clat percentiles (usec): 00:32:09.824 | 1.00th=[ 5473], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7898], 00:32:09.824 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:32:09.824 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11338], 95.00th=[11731], 00:32:09.824 | 99.00th=[50070], 99.50th=[52167], 99.90th=[53216], 99.95th=[54789], 00:32:09.824 | 99.99th=[93848] 00:32:09.824 bw ( KiB/s): min=28160, max=43264, per=37.73%, avg=38579.20, stdev=4246.58, samples=20 00:32:09.824 iops : min= 220, max= 338, avg=301.40, stdev=33.18, samples=20 00:32:09.824 lat (msec) : 10=54.42%, 20=44.48%, 50=0.10%, 100=0.99% 00:32:09.824 cpu : usr=90.78%, sys=8.85%, ctx=18, majf=0, minf=116 00:32:09.824 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.824 issued rwts: total=3017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:09.824 filename0: (groupid=0, jobs=1): err= 0: pid=3251034: Mon Jul 15 15:37:11 2024 00:32:09.824 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(374MiB/10044msec) 00:32:09.824 slat (nsec): min=6089, max=27736, avg=10580.28, stdev=2075.35 00:32:09.824 clat (usec): min=5813, max=94947, avg=10034.23, stdev=4083.15 00:32:09.824 lat (usec): min=5819, max=94961, avg=10044.81, stdev=4083.47 00:32:09.824 clat percentiles (usec): 00:32:09.824 | 1.00th=[ 6783], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 8094], 00:32:09.824 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10421], 00:32:09.824 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11600], 95.00th=[12125], 00:32:09.824 | 99.00th=[13173], 99.50th=[52167], 99.90th=[54264], 99.95th=[94897], 00:32:09.824 | 99.99th=[94897] 00:32:09.824 bw ( KiB/s): min=26112, max=41984, per=37.47%, avg=38310.40, stdev=3596.75, samples=20 00:32:09.824 iops : min= 204, max= 328, avg=299.30, stdev=28.10, samples=20 00:32:09.824 lat (msec) : 10=48.41%, 20=50.98%, 50=0.07%, 100=0.53% 00:32:09.824 cpu : usr=90.46%, sys=9.20%, ctx=18, majf=0, minf=120 00:32:09.824 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.824 issued rwts: total=2995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:09.824 filename0: (groupid=0, jobs=1): err= 0: pid=3251035: Mon Jul 15 15:37:11 2024 00:32:09.824 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10042msec) 00:32:09.824 slat (nsec): min=6129, max=31038, avg=11079.75, stdev=1948.14 00:32:09.824 clat (usec): min=6216, max=96370, avg=14944.22, stdev=12946.77 00:32:09.824 lat (usec): min=6224, max=96381, avg=14955.30, stdev=12946.87 00:32:09.824 clat percentiles (usec): 
00:32:09.824 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[10290], 00:32:09.824 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:32:09.824 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13566], 95.00th=[52691], 00:32:09.824 | 99.00th=[54789], 99.50th=[56886], 99.90th=[95945], 99.95th=[95945], 00:32:09.824 | 99.99th=[95945] 00:32:09.824 bw ( KiB/s): min=16896, max=35072, per=25.18%, avg=25743.65, stdev=5137.52, samples=20 00:32:09.824 iops : min= 132, max= 274, avg=201.10, stdev=40.12, samples=20 00:32:09.824 lat (msec) : 10=15.99%, 20=74.88%, 50=0.15%, 100=8.99% 00:32:09.824 cpu : usr=91.64%, sys=8.04%, ctx=16, majf=0, minf=141 00:32:09.824 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.824 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.824 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:09.824 00:32:09.824 Run status group 0 (all jobs): 00:32:09.824 READ: bw=99.8MiB/s (105MB/s), 25.1MiB/s-37.5MiB/s (26.3MB/s-39.4MB/s), io=1003MiB (1052MB), run=10042-10048msec 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.824 00:32:09.824 real 0m11.097s 00:32:09.824 user 0m36.423s 00:32:09.824 sys 0m2.962s 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:09.824 15:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:09.824 ************************************ 00:32:09.824 END TEST fio_dif_digest 00:32:09.824 ************************************ 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:09.824 15:37:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:09.824 15:37:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:09.824 15:37:12 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:09.824 rmmod nvme_tcp 00:32:09.824 rmmod nvme_fabrics 00:32:09.824 rmmod nvme_keyring 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3242187 ']' 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3242187 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3242187 ']' 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3242187 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3242187 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3242187' 00:32:09.824 killing process with pid 3242187 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3242187 00:32:09.824 15:37:12 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3242187 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:09.824 15:37:12 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:11.721 Waiting for block devices as requested 00:32:11.978 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:11.978 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:11.979 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:11.979 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:12.236 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:12.236 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:12.236 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:12.494 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:12.494 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:12.494 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:12.750 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:12.750 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:12.750 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:12.750 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:13.006 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:13.006 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:13.264 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:13.264 15:37:17 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:13.264 15:37:17 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:13.264 15:37:17 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:13.264 15:37:17 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:13.264 15:37:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.264 15:37:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:13.264 15:37:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.786 15:37:19 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:15.786 00:32:15.786 real 1m16.279s 00:32:15.786 user 7m13.675s 00:32:15.786 sys 0m31.674s 00:32:15.786 15:37:19 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:32:15.786 15:37:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:15.786 ************************************ 00:32:15.786 END TEST nvmf_dif 00:32:15.786 ************************************ 00:32:15.786 15:37:19 -- common/autotest_common.sh@1142 -- # return 0 00:32:15.786 15:37:19 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:15.787 15:37:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:15.787 15:37:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.787 15:37:19 -- common/autotest_common.sh@10 -- # set +x 00:32:15.787 ************************************ 00:32:15.787 START TEST nvmf_abort_qd_sizes 00:32:15.787 ************************************ 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:15.787 * Looking for test storage... 00:32:15.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:15.787 15:37:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:15.787 15:37:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.348 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:22.349 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:22.349 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:22.349 Found net devices under 0000:af:00.0: cvl_0_0 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:22.349 Found net devices under 0000:af:00.1: cvl_0_1 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
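The two "Found net devices under ..." results above come from a sysfs glob over each supported PCI device; a minimal standalone sketch of that PCI-to-netdev mapping (device addresses as discovered in this run):

for pci in 0000:af:00.0 0000:af:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdev bound to the NIC
  pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done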
00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.349 15:37:25 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.349 15:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:22.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:32:22.349 00:32:22.349 --- 10.0.0.2 ping statistics --- 00:32:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.349 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:32:22.349 15:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:32:22.349 00:32:22.349 --- 10.0.0.1 ping statistics --- 00:32:22.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.349 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:32:22.349 15:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.349 15:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:22.349 15:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:22.349 15:37:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:25.678 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:25.678 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:27.056 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:27.056 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.056 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:27.056 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:27.056 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.056 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:27.056 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3259274 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3259274 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3259274 ']' 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:27.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:27.314 15:37:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:27.314 [2024-07-15 15:37:31.026893] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:32:27.315 [2024-07-15 15:37:31.026943] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.315 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.315 [2024-07-15 15:37:31.102564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:27.315 [2024-07-15 15:37:31.177877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.315 [2024-07-15 15:37:31.177915] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.315 [2024-07-15 15:37:31.177924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.315 [2024-07-15 15:37:31.177932] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.315 [2024-07-15 15:37:31.177939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.315 [2024-07-15 15:37:31.177981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.315 [2024-07-15 15:37:31.178076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.315 [2024-07-15 15:37:31.178104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:27.315 [2024-07-15 15:37:31.178106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:32:28.247 15:37:31 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:28.247 15:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:28.247 ************************************ 00:32:28.247 START TEST spdk_target_abort 00:32:28.247 ************************************ 00:32:28.247 15:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:28.248 15:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:28.248 15:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:32:28.248 15:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.248 15:37:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.525 spdk_targetn1 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.525 [2024-07-15 15:37:34.778364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:31.525 [2024-07-15 15:37:34.810602] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:31.525 15:37:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.525 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:34.804 Initializing NVMe Controllers 00:32:34.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:34.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:34.804 Initialization complete. Launching workers. 00:32:34.804 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9710, failed: 0 00:32:34.804 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1988, failed to submit 7722 00:32:34.804 success 857, unsuccess 1131, failed 0 00:32:34.804 15:37:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:34.804 15:37:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:34.804 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.083 Initializing NVMe Controllers 00:32:38.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:38.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:38.083 Initialization complete. Launching workers. 00:32:38.083 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8592, failed: 0 00:32:38.083 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7331 00:32:38.083 success 340, unsuccess 921, failed 0 00:32:38.083 15:37:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:38.083 15:37:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.083 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.356 Initializing NVMe Controllers 00:32:41.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:41.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:41.356 Initialization complete. Launching workers. 
00:32:41.356 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38604, failed: 0 00:32:41.356 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2829, failed to submit 35775 00:32:41.356 success 578, unsuccess 2251, failed 0 00:32:41.356 15:37:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:41.356 15:37:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.356 15:37:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:41.356 15:37:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.356 15:37:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:41.356 15:37:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.356 15:37:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:42.727 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.727 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3259274 00:32:42.727 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3259274 ']' 00:32:42.727 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3259274 00:32:42.727 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:42.727 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:42.727 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3259274 00:32:42.985 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:42.985 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:42.985 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3259274' 00:32:42.985 killing process with pid 3259274 00:32:42.985 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3259274 00:32:42.985 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3259274 00:32:42.985 00:32:42.985 real 0m14.920s 00:32:42.985 user 0m59.020s 00:32:42.985 sys 0m2.777s 00:32:42.985 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:42.985 15:37:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:42.985 ************************************ 00:32:42.985 END TEST spdk_target_abort 00:32:42.985 ************************************ 00:32:43.243 15:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:43.243 15:37:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:43.243 15:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:43.243 15:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:43.243 15:37:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:43.243 
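Each abort pass above is the same SPDK abort example driven at increasing queue depths by rabort(); a condensed sketch of that loop as traced (workspace path as in this run), which the kernel_target_abort test below repeats against the kernel nvmet target at 10.0.0.1:

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do   # qds=(4 24 64) from abort_qd_sizes.sh
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done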
************************************ 00:32:43.243 START TEST kernel_target_abort 00:32:43.243 ************************************ 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:43.243 15:37:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:46.520 Waiting for block devices as requested 00:32:46.520 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:46.520 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:46.520 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:46.520 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:46.520 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:46.520 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:46.520 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:46.777 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:46.777 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:46.777 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:47.035 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:47.035 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:47.035 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:47.293 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:47.293 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:47.293 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:47.551 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:47.551 No valid GPT data, bailing 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:47.551 15:37:51 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:47.551 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:32:47.809 00:32:47.809 Discovery Log Number of Records 2, Generation counter 2 00:32:47.809 =====Discovery Log Entry 0====== 00:32:47.809 trtype: tcp 00:32:47.809 adrfam: ipv4 00:32:47.809 subtype: current discovery subsystem 00:32:47.809 treq: not specified, sq flow control disable supported 00:32:47.809 portid: 1 00:32:47.809 trsvcid: 4420 00:32:47.809 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:47.809 traddr: 10.0.0.1 00:32:47.809 eflags: none 00:32:47.809 sectype: none 00:32:47.809 =====Discovery Log Entry 1====== 00:32:47.809 trtype: tcp 00:32:47.809 adrfam: ipv4 00:32:47.809 subtype: nvme subsystem 00:32:47.809 treq: not specified, sq flow control disable supported 00:32:47.809 portid: 1 00:32:47.809 trsvcid: 4420 00:32:47.809 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:47.809 traddr: 10.0.0.1 00:32:47.809 eflags: none 00:32:47.809 sectype: none 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.809 15:37:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:47.809 15:37:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.809 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.116 Initializing NVMe Controllers 00:32:51.116 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:51.116 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:51.116 Initialization complete. Launching workers. 00:32:51.116 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70932, failed: 0 00:32:51.116 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 70932, failed to submit 0 00:32:51.116 success 0, unsuccess 70932, failed 0 00:32:51.116 15:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:51.116 15:37:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:51.116 EAL: No free 2048 kB hugepages reported on node 1 00:32:54.389 Initializing NVMe Controllers 00:32:54.389 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:54.389 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:54.389 Initialization complete. Launching workers. 
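The mkdir/echo/ln sequence traced above is nvmf/common.sh standing up an in-kernel NVMe-oF/TCP target, but the xtrace does not show where each echo lands. A minimal standalone sketch of the same export, with the destination attribute paths inferred from the standard nvmet configfs layout (the paths are assumptions, not taken from the trace):

  # Export /dev/nvme0n1 over NVMe/TCP at 10.0.0.1:4420 via the kernel nvmet target (run as root)
  modprobe nvmet nvmet-tcp
  cd /sys/kernel/config/nvmet
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  mkdir ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial   # assumed target of the serial echo
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  echo 10.0.0.1 > ports/1/addr_traddr
  echo tcp > ports/1/addr_trtype
  echo 4420 > ports/1/addr_trsvcid
  echo ipv4 > ports/1/addr_adrfam
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The 'nvme discover ... -a 10.0.0.1 -t tcp -s 4420' call above then confirms the export: discovery log entry 0 is the discovery subsystem itself, entry 1 is the exported nqn.2016-06.io.spdk:testnqn.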
00:32:54.389 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 123712, failed: 0 00:32:54.389 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31250, failed to submit 92462 00:32:54.389 success 0, unsuccess 31250, failed 0 00:32:54.389 15:37:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:54.389 15:37:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:54.389 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.907 Initializing NVMe Controllers 00:32:56.907 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.907 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.907 Initialization complete. Launching workers. 00:32:56.907 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 118847, failed: 0 00:32:56.907 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29742, failed to submit 89105 00:32:56.907 success 0, unsuccess 29742, failed 0 00:32:56.907 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:57.164 15:38:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:00.430 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:33:00.430 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:00.430 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:02.328 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:33:02.328 00:33:02.328 real 0m18.881s 00:33:02.328 user 0m7.566s 00:33:02.328 sys 0m6.030s 00:33:02.328 15:38:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:02.328 15:38:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:02.328 ************************************ 00:33:02.328 END TEST kernel_target_abort 00:33:02.328 ************************************ 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:02.328 rmmod nvme_tcp 00:33:02.328 rmmod nvme_fabrics 00:33:02.328 rmmod nvme_keyring 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3259274 ']' 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3259274 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3259274 ']' 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3259274 00:33:02.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3259274) - No such process 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3259274 is not found' 00:33:02.328 Process with pid 3259274 is not found 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:02.328 15:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:04.855 Waiting for block devices as requested 00:33:04.855 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:05.112 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:05.112 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:05.112 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:05.369 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:05.369 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:05.369 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:05.626 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:05.626 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:05.626 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:05.626 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:05.883 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:05.883 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:05.883 0000:80:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:33:06.141 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:06.141 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:06.141 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:33:06.398 15:38:10 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:06.398 15:38:10 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:06.398 15:38:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:06.398 15:38:10 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:06.398 15:38:10 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.398 15:38:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:06.398 15:38:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.921 15:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:08.921 00:33:08.921 real 0m53.012s 00:33:08.921 user 1m10.816s 00:33:08.921 sys 0m18.774s 00:33:08.921 15:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.921 15:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:08.921 ************************************ 00:33:08.921 END TEST nvmf_abort_qd_sizes 00:33:08.921 ************************************ 00:33:08.921 15:38:12 -- common/autotest_common.sh@1142 -- # return 0 00:33:08.921 15:38:12 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:08.921 15:38:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:08.921 15:38:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.921 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.921 ************************************ 00:33:08.921 START TEST keyring_file 00:33:08.921 ************************************ 00:33:08.921 15:38:12 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:08.921 * Looking for test storage... 
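For reference, the clean_kernel_target steps traced earlier (echo 0, rm -f, rmdir, modprobe -r) undo that kernel export in reverse order; collected into one standalone sketch, under the same assumed configfs layout as the setup sketch above:

  # Tear down the kernel nvmet export created for kernel_target_abort
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # assumed destination of the 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet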
00:33:08.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.921 15:38:12 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.921 15:38:12 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.921 15:38:12 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.921 15:38:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.921 15:38:12 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.921 15:38:12 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.921 15:38:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:08.921 15:38:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NrIOSW3zB6 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:08.921 15:38:12 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NrIOSW3zB6 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NrIOSW3zB6 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NrIOSW3zB6 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZQmDk49Bs0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:08.921 15:38:12 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZQmDk49Bs0 00:33:08.921 15:38:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZQmDk49Bs0 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZQmDk49Bs0 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=3268540 00:33:08.921 15:38:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3268540 00:33:08.921 15:38:12 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3268540 ']' 00:33:08.921 15:38:12 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.921 15:38:12 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.921 15:38:12 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.921 15:38:12 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.921 15:38:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:08.921 [2024-07-15 15:38:12.581045] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
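The prep_key calls above turn the raw hex keys (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00) into TLS PSKs in the NVMe/TCP interchange format and store them in 0600 temp files. The python snippet the script pipes is not visible in the xtrace; a sketch of the derivation, assuming the interchange format 'NVMeTLSkey-1:<digest>:<base64 of key bytes plus little-endian CRC-32>:' with digest label 00 for the trailing '0' (no hash) argument used here:

  # Hypothetical standalone equivalent of: prep_key key0 00112233445566778899aabbccddeeff 0
  python3 -c 'import base64,struct,zlib; k=bytes.fromhex("00112233445566778899aabbccddeeff"); print("NVMeTLSkey-1:00:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode() + ":")' > /tmp/psk0.key
  chmod 0600 /tmp/psk0.key   # /tmp/psk0.key stands in for the mktemp path (/tmp/tmp.NrIOSW3zB6 above)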
00:33:08.921 [2024-07-15 15:38:12.581100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268540 ] 00:33:08.921 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.921 [2024-07-15 15:38:12.649440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.921 [2024-07-15 15:38:12.724306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.486 15:38:13 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:09.486 15:38:13 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:09.486 15:38:13 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:09.486 15:38:13 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.486 15:38:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.486 [2024-07-15 15:38:13.377345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.743 null0 00:33:09.743 [2024-07-15 15:38:13.409404] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:09.743 [2024-07-15 15:38:13.409635] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:09.743 [2024-07-15 15:38:13.417414] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.743 15:38:13 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.743 [2024-07-15 15:38:13.429441] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:09.743 request: 00:33:09.743 { 00:33:09.743 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.743 "secure_channel": false, 00:33:09.743 "listen_address": { 00:33:09.743 "trtype": "tcp", 00:33:09.743 "traddr": "127.0.0.1", 00:33:09.743 "trsvcid": "4420" 00:33:09.743 }, 00:33:09.743 "method": "nvmf_subsystem_add_listener", 00:33:09.743 "req_id": 1 00:33:09.743 } 00:33:09.743 Got JSON-RPC error response 00:33:09.743 response: 00:33:09.743 { 00:33:09.743 "code": -32602, 00:33:09.743 "message": "Invalid parameters" 00:33:09.743 } 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 
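The negative test above relies on autotest_common.sh's NOT wrapper, which asserts that a command fails: the 'es=1', '(( es > 128 ))' and '(( !es == 0 ))' steps in the trace are its bookkeeping. A simplified rendition of the idiom (SPDK's real helper also handles expected-signal names, which this sketch omits):

  NOT() {
      local es=0
      "$@" || es=$?                    # run the wrapped command, capturing its exit status
      (( es > 128 )) && return "$es"   # death by signal is never an 'expected' failure
      (( es != 0 ))                    # succeed only if the command itself failed
  }
  # Usage, mirroring the trace: adding a duplicate listener must be rejected
  NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0

Here the RPC fails with JSON-RPC error -32602 ('Listener already exists' / 'Invalid parameters'), so NOT, and therefore the test step, succeeds.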
00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:09.743 15:38:13 keyring_file -- keyring/file.sh@46 -- # bperfpid=3268662 00:33:09.743 15:38:13 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3268662 /var/tmp/bperf.sock 00:33:09.743 15:38:13 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3268662 ']' 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:09.743 15:38:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.743 [2024-07-15 15:38:13.487147] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:33:09.743 [2024-07-15 15:38:13.487193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268662 ] 00:33:09.743 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.743 [2024-07-15 15:38:13.555786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.743 [2024-07-15 15:38:13.624458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.674 15:38:14 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:10.674 15:38:14 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:10.674 15:38:14 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:10.674 15:38:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:10.674 15:38:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZQmDk49Bs0 00:33:10.674 15:38:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZQmDk49Bs0 00:33:10.674 15:38:14 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:10.674 15:38:14 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:10.930 15:38:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.930 15:38:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.930 15:38:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:10.930 15:38:14 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.NrIOSW3zB6 == \/\t\m\p\/\t\m\p\.\N\r\I\O\S\W\3\z\B\6 ]] 00:33:10.930 15:38:14 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:10.930 15:38:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:10.930 15:38:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.930 15:38:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.930 15:38:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.187 15:38:14 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ZQmDk49Bs0 == \/\t\m\p\/\t\m\p\.\Z\Q\m\D\k\4\9\B\s\0 ]] 00:33:11.188 15:38:14 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:11.188 15:38:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:11.188 15:38:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.188 15:38:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.188 15:38:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:11.188 15:38:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.445 15:38:15 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:11.445 15:38:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:11.445 15:38:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.445 15:38:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:11.445 15:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.445 15:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.445 15:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.445 15:38:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:11.445 15:38:15 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:11.445 15:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:11.703 [2024-07-15 15:38:15.430137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:11.703 nvme0n1 00:33:11.703 15:38:15 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:11.703 15:38:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:11.703 15:38:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.703 15:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.703 15:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:11.703 15:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.960 15:38:15 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:11.960 15:38:15 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:11.960 15:38:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:11.960 15:38:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.960 15:38:15 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.960 15:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.960 15:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.960 15:38:15 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:11.960 15:38:15 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:12.217 Running I/O for 1 seconds... 00:33:13.195 00:33:13.195 Latency(us) 00:33:13.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.195 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:13.195 nvme0n1 : 1.01 12383.95 48.37 0.00 0.00 10282.56 6973.03 18140.36 00:33:13.195 =================================================================================================================== 00:33:13.195 Total : 12383.95 48.37 0.00 0.00 10282.56 6973.03 18140.36 00:33:13.195 0 00:33:13.195 15:38:16 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:13.195 15:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:13.453 15:38:17 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.453 15:38:17 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:13.453 15:38:17 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:13.453 15:38:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.710 15:38:17 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:13.710 15:38:17 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.710 15:38:17 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:13.710 15:38:17 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.710 15:38:17 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:13.710 15:38:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.710 15:38:17 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:13.710 15:38:17 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.710 15:38:17 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.710 15:38:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.967 [2024-07-15 15:38:17.646204] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:13.967 [2024-07-15 15:38:17.646926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8d840 (107): Transport endpoint is not connected 00:33:13.967 [2024-07-15 15:38:17.647919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8d840 (9): Bad file descriptor 00:33:13.967 [2024-07-15 15:38:17.648919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:13.967 [2024-07-15 15:38:17.648931] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:13.967 [2024-07-15 15:38:17.648940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:13.967 request: 00:33:13.967 { 00:33:13.967 "name": "nvme0", 00:33:13.967 "trtype": "tcp", 00:33:13.967 "traddr": "127.0.0.1", 00:33:13.967 "adrfam": "ipv4", 00:33:13.967 "trsvcid": "4420", 00:33:13.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.967 "prchk_reftag": false, 00:33:13.967 "prchk_guard": false, 00:33:13.967 "hdgst": false, 00:33:13.967 "ddgst": false, 00:33:13.967 "psk": "key1", 00:33:13.967 "method": "bdev_nvme_attach_controller", 00:33:13.967 "req_id": 1 00:33:13.967 } 00:33:13.967 Got JSON-RPC error response 00:33:13.967 response: 00:33:13.967 { 00:33:13.967 "code": -5, 00:33:13.967 "message": "Input/output error" 00:33:13.967 } 00:33:13.967 15:38:17 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:13.967 15:38:17 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.967 15:38:17 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.967 15:38:17 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.967 15:38:17 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.967 15:38:17 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:13.967 15:38:17 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:13.967 15:38:17 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.967 15:38:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:14.225 15:38:18 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:14.225 15:38:18 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:14.225 15:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:14.482 15:38:18 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:14.482 15:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:14.483 15:38:18 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:14.483 15:38:18 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:14.483 15:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.740 15:38:18 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:14.740 15:38:18 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.NrIOSW3zB6 00:33:14.740 15:38:18 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:14.740 15:38:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:14.740 15:38:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:14.740 15:38:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:14.740 15:38:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:14.740 15:38:18 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:14.740 15:38:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:14.740 15:38:18 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:14.740 15:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:14.997 [2024-07-15 15:38:18.697960] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NrIOSW3zB6': 0100660 00:33:14.997 [2024-07-15 15:38:18.697985] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:14.997 request: 00:33:14.997 { 00:33:14.997 "name": "key0", 00:33:14.997 "path": "/tmp/tmp.NrIOSW3zB6", 00:33:14.997 "method": "keyring_file_add_key", 00:33:14.997 "req_id": 1 00:33:14.997 } 00:33:14.997 Got JSON-RPC error response 00:33:14.997 response: 00:33:14.997 { 00:33:14.997 "code": -1, 00:33:14.997 "message": "Operation not permitted" 00:33:14.997 } 00:33:14.997 15:38:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:14.997 15:38:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:14.997 15:38:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:14.997 15:38:18 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:14.997 15:38:18 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.NrIOSW3zB6 00:33:14.997 15:38:18 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:14.997 15:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NrIOSW3zB6 00:33:14.997 15:38:18 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.NrIOSW3zB6 00:33:14.997 15:38:18 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:14.997 15:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:14.997 15:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:14.997 15:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:14.997 15:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:14.997 15:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.255 15:38:19 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:15.255 15:38:19 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.255 15:38:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:15.255 15:38:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.255 15:38:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:15.255 15:38:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:15.255 15:38:19 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:15.255 15:38:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:15.255 15:38:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.255 15:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.512 [2024-07-15 15:38:19.227358] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NrIOSW3zB6': No such file or directory 00:33:15.512 [2024-07-15 15:38:19.227385] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:15.512 [2024-07-15 15:38:19.227423] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:15.512 [2024-07-15 15:38:19.227431] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:15.512 [2024-07-15 15:38:19.227439] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:15.512 request: 00:33:15.512 { 00:33:15.512 "name": "nvme0", 00:33:15.512 "trtype": "tcp", 00:33:15.512 "traddr": "127.0.0.1", 00:33:15.512 "adrfam": "ipv4", 00:33:15.512 
"trsvcid": "4420", 00:33:15.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.512 "prchk_reftag": false, 00:33:15.512 "prchk_guard": false, 00:33:15.512 "hdgst": false, 00:33:15.512 "ddgst": false, 00:33:15.512 "psk": "key0", 00:33:15.512 "method": "bdev_nvme_attach_controller", 00:33:15.512 "req_id": 1 00:33:15.512 } 00:33:15.512 Got JSON-RPC error response 00:33:15.512 response: 00:33:15.512 { 00:33:15.512 "code": -19, 00:33:15.512 "message": "No such device" 00:33:15.512 } 00:33:15.512 15:38:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:15.513 15:38:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:15.513 15:38:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:15.513 15:38:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:15.513 15:38:19 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:15.513 15:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:15.770 15:38:19 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kJuCTbJgzV 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:15.770 15:38:19 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:15.770 15:38:19 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:15.770 15:38:19 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:15.770 15:38:19 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:15.770 15:38:19 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:15.770 15:38:19 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kJuCTbJgzV 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kJuCTbJgzV 00:33:15.770 15:38:19 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.kJuCTbJgzV 00:33:15.770 15:38:19 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kJuCTbJgzV 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kJuCTbJgzV 00:33:15.770 15:38:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.770 15:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:16.028 nvme0n1 00:33:16.028 
15:38:19 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:16.028 15:38:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:16.028 15:38:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.028 15:38:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.028 15:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.028 15:38:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.286 15:38:20 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:16.286 15:38:20 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:16.286 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:16.543 15:38:20 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:16.543 15:38:20 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.543 15:38:20 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:16.543 15:38:20 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.543 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.801 15:38:20 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:16.801 15:38:20 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:16.801 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:17.059 15:38:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:17.059 15:38:20 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:17.059 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.059 15:38:20 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:17.059 15:38:20 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kJuCTbJgzV 00:33:17.059 15:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kJuCTbJgzV 00:33:17.316 15:38:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZQmDk49Bs0 00:33:17.316 15:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZQmDk49Bs0 00:33:17.573 15:38:21 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.573 15:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.573 nvme0n1 00:33:17.830 15:38:21 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:17.830 15:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:17.830 15:38:21 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:17.830 "subsystems": [ 00:33:17.830 { 00:33:17.830 "subsystem": "keyring", 00:33:17.830 "config": [ 00:33:17.830 { 00:33:17.830 "method": "keyring_file_add_key", 00:33:17.830 "params": { 00:33:17.830 "name": "key0", 00:33:17.830 "path": "/tmp/tmp.kJuCTbJgzV" 00:33:17.830 } 00:33:17.830 }, 00:33:17.830 { 00:33:17.830 "method": "keyring_file_add_key", 00:33:17.830 "params": { 00:33:17.830 "name": "key1", 00:33:17.830 "path": "/tmp/tmp.ZQmDk49Bs0" 00:33:17.830 } 00:33:17.830 } 00:33:17.830 ] 00:33:17.830 }, 00:33:17.830 { 00:33:17.830 "subsystem": "iobuf", 00:33:17.830 "config": [ 00:33:17.830 { 00:33:17.830 "method": "iobuf_set_options", 00:33:17.830 "params": { 00:33:17.830 "small_pool_count": 8192, 00:33:17.830 "large_pool_count": 1024, 00:33:17.830 "small_bufsize": 8192, 00:33:17.830 "large_bufsize": 135168 00:33:17.830 } 00:33:17.830 } 00:33:17.830 ] 00:33:17.830 }, 00:33:17.830 { 00:33:17.830 "subsystem": "sock", 00:33:17.830 "config": [ 00:33:17.830 { 00:33:17.830 "method": "sock_set_default_impl", 00:33:17.830 "params": { 00:33:17.830 "impl_name": "posix" 00:33:17.830 } 00:33:17.830 }, 00:33:17.830 { 00:33:17.830 "method": "sock_impl_set_options", 00:33:17.830 "params": { 00:33:17.830 "impl_name": "ssl", 00:33:17.830 "recv_buf_size": 4096, 00:33:17.830 "send_buf_size": 4096, 00:33:17.830 "enable_recv_pipe": true, 00:33:17.830 "enable_quickack": false, 00:33:17.830 "enable_placement_id": 0, 00:33:17.830 "enable_zerocopy_send_server": true, 00:33:17.830 "enable_zerocopy_send_client": false, 00:33:17.830 "zerocopy_threshold": 0, 00:33:17.830 "tls_version": 0, 00:33:17.830 "enable_ktls": false 00:33:17.830 } 00:33:17.830 }, 00:33:17.830 { 00:33:17.830 "method": "sock_impl_set_options", 00:33:17.830 "params": { 00:33:17.830 "impl_name": "posix", 00:33:17.830 "recv_buf_size": 2097152, 00:33:17.830 "send_buf_size": 2097152, 00:33:17.830 "enable_recv_pipe": true, 00:33:17.830 "enable_quickack": false, 00:33:17.830 "enable_placement_id": 0, 00:33:17.830 "enable_zerocopy_send_server": true, 00:33:17.830 "enable_zerocopy_send_client": false, 00:33:17.830 "zerocopy_threshold": 0, 00:33:17.830 "tls_version": 0, 00:33:17.830 "enable_ktls": false 00:33:17.830 } 00:33:17.830 } 00:33:17.830 ] 00:33:17.830 }, 00:33:17.830 { 00:33:17.830 "subsystem": "vmd", 00:33:17.830 "config": [] 00:33:17.830 }, 00:33:17.830 { 00:33:17.830 "subsystem": "accel", 00:33:17.830 "config": [ 00:33:17.830 { 00:33:17.830 "method": "accel_set_options", 00:33:17.830 "params": { 00:33:17.830 "small_cache_size": 128, 00:33:17.830 "large_cache_size": 16, 00:33:17.830 "task_count": 2048, 00:33:17.830 "sequence_count": 2048, 00:33:17.831 "buf_count": 2048 00:33:17.831 } 00:33:17.831 } 00:33:17.831 ] 00:33:17.831 
}, 00:33:17.831 { 00:33:17.831 "subsystem": "bdev", 00:33:17.831 "config": [ 00:33:17.831 { 00:33:17.831 "method": "bdev_set_options", 00:33:17.831 "params": { 00:33:17.831 "bdev_io_pool_size": 65535, 00:33:17.831 "bdev_io_cache_size": 256, 00:33:17.831 "bdev_auto_examine": true, 00:33:17.831 "iobuf_small_cache_size": 128, 00:33:17.831 "iobuf_large_cache_size": 16 00:33:17.831 } 00:33:17.831 }, 00:33:17.831 { 00:33:17.831 "method": "bdev_raid_set_options", 00:33:17.831 "params": { 00:33:17.831 "process_window_size_kb": 1024 00:33:17.831 } 00:33:17.831 }, 00:33:17.831 { 00:33:17.831 "method": "bdev_iscsi_set_options", 00:33:17.831 "params": { 00:33:17.831 "timeout_sec": 30 00:33:17.831 } 00:33:17.831 }, 00:33:17.831 { 00:33:17.831 "method": "bdev_nvme_set_options", 00:33:17.831 "params": { 00:33:17.831 "action_on_timeout": "none", 00:33:17.831 "timeout_us": 0, 00:33:17.831 "timeout_admin_us": 0, 00:33:17.831 "keep_alive_timeout_ms": 10000, 00:33:17.831 "arbitration_burst": 0, 00:33:17.831 "low_priority_weight": 0, 00:33:17.831 "medium_priority_weight": 0, 00:33:17.831 "high_priority_weight": 0, 00:33:17.831 "nvme_adminq_poll_period_us": 10000, 00:33:17.831 "nvme_ioq_poll_period_us": 0, 00:33:17.831 "io_queue_requests": 512, 00:33:17.831 "delay_cmd_submit": true, 00:33:17.831 "transport_retry_count": 4, 00:33:17.831 "bdev_retry_count": 3, 00:33:17.831 "transport_ack_timeout": 0, 00:33:17.831 "ctrlr_loss_timeout_sec": 0, 00:33:17.831 "reconnect_delay_sec": 0, 00:33:17.831 "fast_io_fail_timeout_sec": 0, 00:33:17.831 "disable_auto_failback": false, 00:33:17.831 "generate_uuids": false, 00:33:17.831 "transport_tos": 0, 00:33:17.831 "nvme_error_stat": false, 00:33:17.831 "rdma_srq_size": 0, 00:33:17.831 "io_path_stat": false, 00:33:17.831 "allow_accel_sequence": false, 00:33:17.831 "rdma_max_cq_size": 0, 00:33:17.831 "rdma_cm_event_timeout_ms": 0, 00:33:17.831 "dhchap_digests": [ 00:33:17.831 "sha256", 00:33:17.831 "sha384", 00:33:17.831 "sha512" 00:33:17.831 ], 00:33:17.831 "dhchap_dhgroups": [ 00:33:17.831 "null", 00:33:17.831 "ffdhe2048", 00:33:17.831 "ffdhe3072", 00:33:17.831 "ffdhe4096", 00:33:17.831 "ffdhe6144", 00:33:17.831 "ffdhe8192" 00:33:17.831 ] 00:33:17.831 } 00:33:17.831 }, 00:33:17.831 { 00:33:17.831 "method": "bdev_nvme_attach_controller", 00:33:17.831 "params": { 00:33:17.831 "name": "nvme0", 00:33:17.831 "trtype": "TCP", 00:33:17.831 "adrfam": "IPv4", 00:33:17.831 "traddr": "127.0.0.1", 00:33:17.831 "trsvcid": "4420", 00:33:17.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.831 "prchk_reftag": false, 00:33:17.831 "prchk_guard": false, 00:33:17.831 "ctrlr_loss_timeout_sec": 0, 00:33:17.831 "reconnect_delay_sec": 0, 00:33:17.831 "fast_io_fail_timeout_sec": 0, 00:33:17.831 "psk": "key0", 00:33:17.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.831 "hdgst": false, 00:33:17.831 "ddgst": false 00:33:17.831 } 00:33:17.831 }, 00:33:17.831 { 00:33:17.831 "method": "bdev_nvme_set_hotplug", 00:33:17.831 "params": { 00:33:17.831 "period_us": 100000, 00:33:17.831 "enable": false 00:33:17.831 } 00:33:17.831 }, 00:33:17.831 { 00:33:17.831 "method": "bdev_wait_for_examine" 00:33:17.831 } 00:33:17.831 ] 00:33:17.831 }, 00:33:17.831 { 00:33:17.831 "subsystem": "nbd", 00:33:17.831 "config": [] 00:33:17.831 } 00:33:17.831 ] 00:33:17.831 }' 00:33:17.831 15:38:21 keyring_file -- keyring/file.sh@114 -- # killprocess 3268662 00:33:17.831 15:38:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3268662 ']' 00:33:17.831 15:38:21 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3268662 00:33:17.831 15:38:21 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:17.831 15:38:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:17.831 15:38:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268662 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268662' 00:33:18.088 killing process with pid 3268662 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@967 -- # kill 3268662 00:33:18.088 Received shutdown signal, test time was about 1.000000 seconds 00:33:18.088 00:33:18.088 Latency(us) 00:33:18.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.088 =================================================================================================================== 00:33:18.088 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@972 -- # wait 3268662 00:33:18.088 15:38:21 keyring_file -- keyring/file.sh@117 -- # bperfpid=3270125 00:33:18.088 15:38:21 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3270125 /var/tmp/bperf.sock 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3270125 ']' 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:18.088 15:38:21 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:18.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
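The relaunched bdevperf above is started with -z, so it idles until told to run, and with -c /dev/fd/63, so it reads the JSON configuration echoed below through a bash process substitution rather than a file on disk; the "Waiting for process..." message comes from a helper that polls the RPC socket until the application answers. A minimal sketch of that launch-and-wait pattern (the retry budget and helper structure are assumptions, not SPDK's exact waitforlisten implementation):

# Start bdevperf paused (-z), feeding the config via process substitution,
# then poll its JSON-RPC UNIX socket until it is ready to accept commands.
rpc_sock=/var/tmp/bperf.sock
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$rpc_sock" -z -c <(echo "$config_json") &
bperf_pid=$!

for ((i = 0; i < 200; i++)); do    # ~100 s total budget (assumed)
    # rpc_get_methods succeeds once the RPC server is listening
    if ./scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$bperf_pid" || exit 1    # give up if the process died early
    sleep 0.5
done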
00:33:18.088 15:38:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:18.089 15:38:21 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:18.089 "subsystems": [ 00:33:18.089 { 00:33:18.089 "subsystem": "keyring", 00:33:18.089 "config": [ 00:33:18.089 { 00:33:18.089 "method": "keyring_file_add_key", 00:33:18.089 "params": { 00:33:18.089 "name": "key0", 00:33:18.089 "path": "/tmp/tmp.kJuCTbJgzV" 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "keyring_file_add_key", 00:33:18.089 "params": { 00:33:18.089 "name": "key1", 00:33:18.089 "path": "/tmp/tmp.ZQmDk49Bs0" 00:33:18.089 } 00:33:18.089 } 00:33:18.089 ] 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "subsystem": "iobuf", 00:33:18.089 "config": [ 00:33:18.089 { 00:33:18.089 "method": "iobuf_set_options", 00:33:18.089 "params": { 00:33:18.089 "small_pool_count": 8192, 00:33:18.089 "large_pool_count": 1024, 00:33:18.089 "small_bufsize": 8192, 00:33:18.089 "large_bufsize": 135168 00:33:18.089 } 00:33:18.089 } 00:33:18.089 ] 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "subsystem": "sock", 00:33:18.089 "config": [ 00:33:18.089 { 00:33:18.089 "method": "sock_set_default_impl", 00:33:18.089 "params": { 00:33:18.089 "impl_name": "posix" 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "sock_impl_set_options", 00:33:18.089 "params": { 00:33:18.089 "impl_name": "ssl", 00:33:18.089 "recv_buf_size": 4096, 00:33:18.089 "send_buf_size": 4096, 00:33:18.089 "enable_recv_pipe": true, 00:33:18.089 "enable_quickack": false, 00:33:18.089 "enable_placement_id": 0, 00:33:18.089 "enable_zerocopy_send_server": true, 00:33:18.089 "enable_zerocopy_send_client": false, 00:33:18.089 "zerocopy_threshold": 0, 00:33:18.089 "tls_version": 0, 00:33:18.089 "enable_ktls": false 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "sock_impl_set_options", 00:33:18.089 "params": { 00:33:18.089 "impl_name": "posix", 00:33:18.089 "recv_buf_size": 2097152, 00:33:18.089 "send_buf_size": 2097152, 00:33:18.089 "enable_recv_pipe": true, 00:33:18.089 "enable_quickack": false, 00:33:18.089 "enable_placement_id": 0, 00:33:18.089 "enable_zerocopy_send_server": true, 00:33:18.089 "enable_zerocopy_send_client": false, 00:33:18.089 "zerocopy_threshold": 0, 00:33:18.089 "tls_version": 0, 00:33:18.089 "enable_ktls": false 00:33:18.089 } 00:33:18.089 } 00:33:18.089 ] 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "subsystem": "vmd", 00:33:18.089 "config": [] 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "subsystem": "accel", 00:33:18.089 "config": [ 00:33:18.089 { 00:33:18.089 "method": "accel_set_options", 00:33:18.089 "params": { 00:33:18.089 "small_cache_size": 128, 00:33:18.089 "large_cache_size": 16, 00:33:18.089 "task_count": 2048, 00:33:18.089 "sequence_count": 2048, 00:33:18.089 "buf_count": 2048 00:33:18.089 } 00:33:18.089 } 00:33:18.089 ] 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "subsystem": "bdev", 00:33:18.089 "config": [ 00:33:18.089 { 00:33:18.089 "method": "bdev_set_options", 00:33:18.089 "params": { 00:33:18.089 "bdev_io_pool_size": 65535, 00:33:18.089 "bdev_io_cache_size": 256, 00:33:18.089 "bdev_auto_examine": true, 00:33:18.089 "iobuf_small_cache_size": 128, 00:33:18.089 "iobuf_large_cache_size": 16 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "bdev_raid_set_options", 00:33:18.089 "params": { 00:33:18.089 "process_window_size_kb": 1024 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "bdev_iscsi_set_options", 00:33:18.089 "params": { 00:33:18.089 
"timeout_sec": 30 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "bdev_nvme_set_options", 00:33:18.089 "params": { 00:33:18.089 "action_on_timeout": "none", 00:33:18.089 "timeout_us": 0, 00:33:18.089 "timeout_admin_us": 0, 00:33:18.089 "keep_alive_timeout_ms": 10000, 00:33:18.089 "arbitration_burst": 0, 00:33:18.089 "low_priority_weight": 0, 00:33:18.089 "medium_priority_weight": 0, 00:33:18.089 "high_priority_weight": 0, 00:33:18.089 "nvme_adminq_poll_period_us": 10000, 00:33:18.089 "nvme_ioq_poll_period_us": 0, 00:33:18.089 "io_queue_requests": 512, 00:33:18.089 "delay_cmd_submit": true, 00:33:18.089 "transport_retry_count": 4, 00:33:18.089 "bdev_retry_count": 3, 00:33:18.089 "transport_ack_timeout": 0, 00:33:18.089 "ctrlr_loss_timeout_sec": 0, 00:33:18.089 "reconnect_delay_sec": 0, 00:33:18.089 "fast_io_fail_timeout_sec": 0, 00:33:18.089 "disable_auto_failback": false, 00:33:18.089 "generate_uuids": false, 00:33:18.089 "transport_tos": 0, 00:33:18.089 "nvme_error_stat": false, 00:33:18.089 "rdma_srq_size": 0, 00:33:18.089 "io_path_stat": false, 00:33:18.089 "allow_accel_sequence": false, 00:33:18.089 "rdma_max_cq_size": 0, 00:33:18.089 "rdma_cm_event_timeout_ms": 0, 00:33:18.089 "dhchap_digests": [ 00:33:18.089 "sha256", 00:33:18.089 "sha384", 00:33:18.089 "sha512" 00:33:18.089 ], 00:33:18.089 "dhchap_dhgroups": [ 00:33:18.089 "null", 00:33:18.089 "ffdhe2048", 00:33:18.089 "ffdhe3072", 00:33:18.089 "ffdhe4096", 00:33:18.089 "ffdhe6144", 00:33:18.089 "ffdhe8192" 00:33:18.089 ] 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "bdev_nvme_attach_controller", 00:33:18.089 "params": { 00:33:18.089 "name": "nvme0", 00:33:18.089 "trtype": "TCP", 00:33:18.089 "adrfam": "IPv4", 00:33:18.089 "traddr": "127.0.0.1", 00:33:18.089 "trsvcid": "4420", 00:33:18.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:18.089 "prchk_reftag": false, 00:33:18.089 "prchk_guard": false, 00:33:18.089 "ctrlr_loss_timeout_sec": 0, 00:33:18.089 "reconnect_delay_sec": 0, 00:33:18.089 "fast_io_fail_timeout_sec": 0, 00:33:18.089 "psk": "key0", 00:33:18.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:18.089 "hdgst": false, 00:33:18.089 "ddgst": false 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "bdev_nvme_set_hotplug", 00:33:18.089 "params": { 00:33:18.089 "period_us": 100000, 00:33:18.089 "enable": false 00:33:18.089 } 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "method": "bdev_wait_for_examine" 00:33:18.089 } 00:33:18.089 ] 00:33:18.089 }, 00:33:18.089 { 00:33:18.089 "subsystem": "nbd", 00:33:18.089 "config": [] 00:33:18.089 } 00:33:18.089 ] 00:33:18.089 }' 00:33:18.089 15:38:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:18.347 [2024-07-15 15:38:21.998729] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:33:18.347 [2024-07-15 15:38:21.998787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270125 ] 00:33:18.347 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.347 [2024-07-15 15:38:22.068599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.347 [2024-07-15 15:38:22.141594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.604 [2024-07-15 15:38:22.299951] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:19.168 15:38:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:19.168 15:38:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:19.168 15:38:22 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:19.168 15:38:22 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:19.168 15:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.168 15:38:22 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:19.168 15:38:22 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:19.168 15:38:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.168 15:38:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:19.168 15:38:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.168 15:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.168 15:38:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:19.425 15:38:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:19.425 15:38:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:19.425 15:38:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:19.425 15:38:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.425 15:38:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.425 15:38:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.425 15:38:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.425 15:38:23 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:19.425 15:38:23 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:19.425 15:38:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:19.425 15:38:23 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:19.682 15:38:23 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:19.682 15:38:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:19.682 15:38:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.kJuCTbJgzV /tmp/tmp.ZQmDk49Bs0 00:33:19.682 15:38:23 keyring_file -- keyring/file.sh@20 -- # killprocess 3270125 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3270125 ']' 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3270125 00:33:19.682 15:38:23 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270125 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270125' 00:33:19.682 killing process with pid 3270125 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@967 -- # kill 3270125 00:33:19.682 Received shutdown signal, test time was about 1.000000 seconds 00:33:19.682 00:33:19.682 Latency(us) 00:33:19.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.682 =================================================================================================================== 00:33:19.682 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:19.682 15:38:23 keyring_file -- common/autotest_common.sh@972 -- # wait 3270125 00:33:19.938 15:38:23 keyring_file -- keyring/file.sh@21 -- # killprocess 3268540 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3268540 ']' 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3268540 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268540 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268540' 00:33:19.938 killing process with pid 3268540 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@967 -- # kill 3268540 00:33:19.938 [2024-07-15 15:38:23.739446] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:19.938 15:38:23 keyring_file -- common/autotest_common.sh@972 -- # wait 3268540 00:33:20.195 00:33:20.195 real 0m11.752s 00:33:20.195 user 0m27.015s 00:33:20.195 sys 0m3.293s 00:33:20.195 15:38:24 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:20.195 15:38:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:20.195 ************************************ 00:33:20.195 END TEST keyring_file 00:33:20.195 ************************************ 00:33:20.195 15:38:24 -- common/autotest_common.sh@1142 -- # return 0 00:33:20.195 15:38:24 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:20.195 15:38:24 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:20.195 15:38:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:20.195 15:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:20.195 15:38:24 -- common/autotest_common.sh@10 -- # set +x 00:33:20.453 ************************************ 00:33:20.453 START TEST keyring_linux 00:33:20.453 ************************************ 00:33:20.453 15:38:24 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:20.453 * Looking for test storage... 00:33:20.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.453 15:38:24 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.453 15:38:24 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.453 15:38:24 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.453 15:38:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.453 15:38:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.453 15:38:24 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.453 15:38:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:20.453 15:38:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:20.453 15:38:24 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:20.453 /tmp/:spdk-test:key0 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:20.453 15:38:24 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:20.453 15:38:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:20.453 /tmp/:spdk-test:key1 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3270729 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:20.453 15:38:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3270729 00:33:20.453 15:38:24 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3270729 ']' 00:33:20.453 15:38:24 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.453 15:38:24 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:20.453 15:38:24 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.453 15:38:24 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:20.453 15:38:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:20.453 [2024-07-15 15:38:24.356824] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
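The prep_key/format_interchange_psk steps above turn each raw hex string into the NVMe/TCP TLS PSK interchange format, NVMeTLSkey-1:<2-digit hash id>:<base64 payload>:, and write it to a mode-0600 file. A sketch of that derivation, under the assumption (consistent with the NVMeTLSkey-1:00:MDAx...JEiQ: value logged above) that the configured string is used as literal ASCII key material with a little-endian CRC32 appended before base64 encoding:

format_interchange_psk() {    # usage: format_interchange_psk <key> <digest>
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # key material as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32, little-endian
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), b64), end="")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0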
00:33:20.453 [2024-07-15 15:38:24.356882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270729 ] 00:33:20.711 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.711 [2024-07-15 15:38:24.424793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.711 [2024-07-15 15:38:24.499625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.276 15:38:25 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:21.276 15:38:25 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:21.277 15:38:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:21.277 15:38:25 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.277 15:38:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:21.277 [2024-07-15 15:38:25.165322] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:21.533 null0 00:33:21.533 [2024-07-15 15:38:25.197373] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:21.533 [2024-07-15 15:38:25.197767] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:21.533 15:38:25 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.533 15:38:25 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:21.533 461511701 00:33:21.533 15:38:25 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:21.533 724536096 00:33:21.533 15:38:25 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3270764 00:33:21.533 15:38:25 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3270764 /var/tmp/bperf.sock 00:33:21.533 15:38:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3270764 ']' 00:33:21.533 15:38:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:21.533 15:38:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:21.533 15:38:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:21.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:21.533 15:38:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:21.533 15:38:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:21.533 15:38:25 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:21.533 [2024-07-15 15:38:25.272297] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
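Unlike keyring_file, keyring_linux never hands the bdev layer a file path: the formatted PSKs are loaded into the kernel session keyring, and the serial numbers printed above (461511701 for :spdk-test:key0, 724536096 for :spdk-test:key1) are what the test later resolves with keyctl search and tears down with keyctl unlink. The same round trip from a shell, with $psk standing in for the NVMeTLSkey-1 payload:

# Install the PSK as a "user" key in the session keyring (@s); keyctl
# prints the serial number of the newly created key.
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)

keyctl search @s user :spdk-test:key0    # resolves the name back to $sn
keyctl print "$sn"                       # dumps the NVMeTLSkey-1 payload
keyctl unlink "$sn"                      # reports "1 links removed"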
00:33:21.533 [2024-07-15 15:38:25.272353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270764 ] 00:33:21.533 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.533 [2024-07-15 15:38:25.342143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.533 [2024-07-15 15:38:25.415573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.463 15:38:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:22.463 15:38:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:22.463 15:38:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:22.463 15:38:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:22.463 15:38:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:22.463 15:38:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.721 15:38:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:22.721 15:38:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:22.978 [2024-07-15 15:38:26.638619] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:22.978 nvme0n1 00:33:22.978 15:38:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:22.978 15:38:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:22.978 15:38:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:22.978 15:38:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:22.978 15:38:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.978 15:38:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:23.234 15:38:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:23.234 15:38:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:23.234 15:38:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:23.234 15:38:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:23.234 15:38:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.235 15:38:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:23.235 15:38:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.235 15:38:27 keyring_linux -- keyring/linux.sh@25 -- # sn=461511701 00:33:23.235 15:38:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:23.235 15:38:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:23.235 15:38:27 keyring_linux -- keyring/linux.sh@26 -- # [[ 461511701 == \4\6\1\5\1\1\7\0\1 ]] 00:33:23.235 15:38:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 461511701 00:33:23.235 15:38:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:23.235 15:38:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.490 Running I/O for 1 seconds... 00:33:24.417 00:33:24.417 Latency(us) 00:33:24.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.417 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:24.417 nvme0n1 : 1.01 12593.18 49.19 0.00 0.00 10123.23 7811.89 19713.23 00:33:24.417 =================================================================================================================== 00:33:24.417 Total : 12593.18 49.19 0.00 0.00 10123.23 7811.89 19713.23 00:33:24.417 0 00:33:24.417 15:38:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:24.417 15:38:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:24.674 15:38:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:24.674 15:38:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:24.675 15:38:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:24.675 15:38:28 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:24.675 15:38:28 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:24.675 15:38:28 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:24.675 15:38:28 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.675 15:38:28 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:24.675 15:38:28 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.675 15:38:28 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:24.675 15:38:28 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:24.932 [2024-07-15 15:38:28.730120] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:24.932 [2024-07-15 15:38:28.730733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dc760 (107): Transport endpoint is not connected 00:33:24.932 [2024-07-15 15:38:28.731727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20dc760 (9): Bad file descriptor 00:33:24.932 [2024-07-15 15:38:28.732728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:24.932 [2024-07-15 15:38:28.732740] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:24.932 [2024-07-15 15:38:28.732749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:24.932 request: 00:33:24.932 { 00:33:24.932 "name": "nvme0", 00:33:24.932 "trtype": "tcp", 00:33:24.932 "traddr": "127.0.0.1", 00:33:24.932 "adrfam": "ipv4", 00:33:24.932 "trsvcid": "4420", 00:33:24.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:24.932 "prchk_reftag": false, 00:33:24.932 "prchk_guard": false, 00:33:24.932 "hdgst": false, 00:33:24.932 "ddgst": false, 00:33:24.932 "psk": ":spdk-test:key1", 00:33:24.932 "method": "bdev_nvme_attach_controller", 00:33:24.932 "req_id": 1 00:33:24.932 } 00:33:24.932 Got JSON-RPC error response 00:33:24.932 response: 00:33:24.932 { 00:33:24.932 "code": -5, 00:33:24.932 "message": "Input/output error" 00:33:24.932 } 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@33 -- # sn=461511701 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 461511701 00:33:24.932 1 links removed 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@33 -- # sn=724536096 00:33:24.932 
15:38:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 724536096 00:33:24.932 1 links removed 00:33:24.932 15:38:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3270764 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3270764 ']' 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3270764 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270764 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270764' 00:33:24.932 killing process with pid 3270764 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@967 -- # kill 3270764 00:33:24.932 Received shutdown signal, test time was about 1.000000 seconds 00:33:24.932 00:33:24.932 Latency(us) 00:33:24.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.932 =================================================================================================================== 00:33:24.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.932 15:38:28 keyring_linux -- common/autotest_common.sh@972 -- # wait 3270764 00:33:25.188 15:38:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3270729 00:33:25.188 15:38:28 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3270729 ']' 00:33:25.188 15:38:28 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3270729 00:33:25.188 15:38:28 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:25.188 15:38:29 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.188 15:38:29 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270729 00:33:25.188 15:38:29 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:25.188 15:38:29 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:25.188 15:38:29 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270729' 00:33:25.188 killing process with pid 3270729 00:33:25.188 15:38:29 keyring_linux -- common/autotest_common.sh@967 -- # kill 3270729 00:33:25.188 15:38:29 keyring_linux -- common/autotest_common.sh@972 -- # wait 3270729 00:33:25.751 00:33:25.751 real 0m5.227s 00:33:25.751 user 0m9.038s 00:33:25.751 sys 0m1.607s 00:33:25.751 15:38:29 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:25.751 15:38:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:25.751 ************************************ 00:33:25.751 END TEST keyring_linux 00:33:25.751 ************************************ 00:33:25.751 15:38:29 -- common/autotest_common.sh@1142 -- # return 0 00:33:25.751 15:38:29 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:25.751 15:38:29 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:25.751 15:38:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:25.751 15:38:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:25.751 15:38:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:25.751 15:38:29 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:25.751 15:38:29 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:25.751 15:38:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:25.751 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:33:25.751 15:38:29 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:25.751 15:38:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:25.751 15:38:29 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:25.751 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:33:32.302 INFO: APP EXITING 00:33:32.302 INFO: killing all VMs 00:33:32.302 INFO: killing vhost app 00:33:32.302 INFO: EXIT DONE 00:33:35.612 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:35.612 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:33:38.895 Cleaning 00:33:38.895 Removing: /var/run/dpdk/spdk0/config 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:38.895 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:38.895 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:38.895 Removing: /var/run/dpdk/spdk1/config 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:38.895 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:38.895 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:38.895 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:38.895 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:38.895 Removing: /var/run/dpdk/spdk2/config 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:38.895 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:38.895 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:38.895 Removing: /var/run/dpdk/spdk3/config 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:38.895 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:38.895 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:38.895 Removing: /var/run/dpdk/spdk4/config 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:38.895 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:38.895 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:38.895 Removing: /dev/shm/bdev_svc_trace.1 00:33:38.895 Removing: /dev/shm/nvmf_trace.0 00:33:38.896 Removing: /dev/shm/spdk_tgt_trace.pid2866015 00:33:38.896 Removing: /var/run/dpdk/spdk0 00:33:38.896 Removing: /var/run/dpdk/spdk1 00:33:38.896 Removing: /var/run/dpdk/spdk2 00:33:38.896 Removing: /var/run/dpdk/spdk3 00:33:38.896 Removing: /var/run/dpdk/spdk4 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2863563 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2864803 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2866015 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2866708 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2867626 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2867829 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2868910 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2869041 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2869301 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2871008 00:33:38.896 Removing: 
/var/run/dpdk/spdk_pid2872440 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2872755 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2873075 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2873414 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2873733 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2874023 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2874267 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2874511 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2875418 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2878884 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2879189 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2879507 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2879731 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2880297 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2880413 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2880873 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2881125 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2881423 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2881459 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2881734 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2881980 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2882365 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2882643 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2882975 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2883271 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2883293 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2883547 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2883772 00:33:38.896 Removing: /var/run/dpdk/spdk_pid2883990 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2884214 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2884493 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2884775 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2885052 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2885339 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2885619 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2885904 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2886181 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2886459 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2886695 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2886915 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2887143 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2887371 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2887629 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2887918 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2888200 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2888482 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2888766 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2888877 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2889251 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2893150 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2939960 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2944530 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2954949 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2960568 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2964800 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2965447 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2971946 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2978861 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2978864 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2979663 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2980631 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2981505 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2982038 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2982062 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2982311 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2982571 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2982573 00:33:39.153 Removing: 
/var/run/dpdk/spdk_pid2983370 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2984363 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2985209 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2985745 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2985833 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2986115 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2987410 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2988523 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2997218 00:33:39.153 Removing: /var/run/dpdk/spdk_pid2997518 00:33:39.153 Removing: /var/run/dpdk/spdk_pid3002011 00:33:39.153 Removing: /var/run/dpdk/spdk_pid3008104 00:33:39.153 Removing: /var/run/dpdk/spdk_pid3010802 00:33:39.153 Removing: /var/run/dpdk/spdk_pid3022141 00:33:39.153 Removing: /var/run/dpdk/spdk_pid3031495 00:33:39.153 Removing: /var/run/dpdk/spdk_pid3033320 00:33:39.153 Removing: /var/run/dpdk/spdk_pid3034137 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3052002 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3055939 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3081593 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3086381 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3087995 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3089977 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3090094 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3090348 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3090614 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3091194 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3093095 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3094159 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3094722 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3096996 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3098229 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3098817 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3103192 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3113757 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3117974 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3124082 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3125541 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3127021 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3131570 00:33:39.411 Removing: /var/run/dpdk/spdk_pid3135822 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3143749 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3143797 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3149114 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3149337 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3149477 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3149914 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3149922 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3154793 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3155350 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3159953 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3162854 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3168411 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3174134 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3183031 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3190507 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3190509 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3210011 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3210680 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3211227 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3212008 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3212871 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3213420 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3214204 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3214751 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3219265 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3219530 00:33:39.412 Removing: 
/var/run/dpdk/spdk_pid3225864 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3226061 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3228422 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3236416 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3236429 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3242491 00:33:39.412 Removing: /var/run/dpdk/spdk_pid3244484 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3246477 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3247666 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3249666 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3250879 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3260010 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3260536 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3261060 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3263507 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3264035 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3264567 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3268540 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3268662 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3270125 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3270729 00:33:39.669 Removing: /var/run/dpdk/spdk_pid3270764 00:33:39.669 Clean 00:33:39.669 15:38:43 -- common/autotest_common.sh@1451 -- # return 0 00:33:39.669 15:38:43 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:39.669 15:38:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.669 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:33:39.669 15:38:43 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:39.669 15:38:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.669 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:33:39.669 15:38:43 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:39.669 15:38:43 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:39.669 15:38:43 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:39.669 15:38:43 -- spdk/autotest.sh@391 -- # hash lcov 00:33:39.669 15:38:43 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:39.926 15:38:43 -- spdk/autotest.sh@393 -- # hostname 00:33:39.926 15:38:43 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:39.926 geninfo: WARNING: invalid characters removed from testname! 
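[Editor's note] For anyone reconstructing the coverage step from this log: the capture above and the merge-and-filter sequence that follows reduce to roughly the sketch below. $SPDK and $OUT are shorthand for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest paths; the lcov flags and the exclusion patterns are the ones visible in the log, and this is a hedged sketch, not the verbatim autotest.sh code.

  # Hedged sketch of the lcov sequence seen in this log.
  RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  # 1) capture coverage from the test run, tagging the tracefile with the node name
  lcov $RC --no-external -q -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"
  # 2) merge the pre-test baseline with the test capture
  lcov $RC --no-external -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  # 3) strip paths that should not count toward SPDK coverage, one pattern per pass
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $RC --no-external -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done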
00:34:01.847 15:39:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:02.782 15:39:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:04.683 15:39:08 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:06.058 15:39:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:07.963 15:39:11 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:09.867 15:39:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:11.241 15:39:15 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:11.241 15:39:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.241 15:39:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:11.241 15:39:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.241 15:39:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.241 15:39:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.241 15:39:15 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.241 15:39:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.241 15:39:15 -- paths/export.sh@5 -- $ export PATH 00:34:11.241 15:39:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.241 15:39:15 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:11.241 15:39:15 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:11.241 15:39:15 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721050755.XXXXXX 00:34:11.241 15:39:15 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721050755.bkV83b 00:34:11.241 15:39:15 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:11.241 15:39:15 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:11.241 15:39:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:11.241 15:39:15 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:11.241 15:39:15 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:11.241 15:39:15 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:11.241 15:39:15 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:11.241 15:39:15 -- common/autotest_common.sh@10 -- $ set +x 00:34:11.241 15:39:15 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:11.241 15:39:15 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:11.241 15:39:15 -- pm/common@17 -- $ local monitor 00:34:11.241 15:39:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:11.241 15:39:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:11.241 15:39:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:11.241 15:39:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:11.241 15:39:15 -- pm/common@21 -- $ date +%s 00:34:11.241 15:39:15 -- pm/common@25 -- $ sleep 1 00:34:11.241 
15:39:15 -- pm/common@21 -- $ date +%s 00:34:11.241 15:39:15 -- pm/common@21 -- $ date +%s 00:34:11.241 15:39:15 -- pm/common@21 -- $ date +%s 00:34:11.500 15:39:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050755 00:34:11.500 15:39:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050755 00:34:11.500 15:39:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050755 00:34:11.500 15:39:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050755 00:34:11.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050755_collect-vmstat.pm.log 00:34:11.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050755_collect-cpu-load.pm.log 00:34:11.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050755_collect-cpu-temp.pm.log 00:34:11.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050755_collect-bmc-pm.bmc.pm.log 00:34:12.435 15:39:16 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:12.435 15:39:16 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:34:12.435 15:39:16 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:12.435 15:39:16 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:12.435 15:39:16 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:12.435 15:39:16 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:12.435 15:39:16 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:12.435 15:39:16 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:12.435 15:39:16 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:12.435 15:39:16 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:12.436 15:39:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:12.436 15:39:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:12.436 15:39:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:12.436 15:39:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:12.436 15:39:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:12.436 15:39:16 -- pm/common@44 -- $ pid=3282512 00:34:12.436 15:39:16 -- pm/common@50 -- $ kill -TERM 3282512 00:34:12.436 15:39:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:12.436 15:39:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:12.436 15:39:16 -- pm/common@44 -- $ pid=3282514 00:34:12.436 15:39:16 -- pm/common@50 -- $ 
kill -TERM 3282514 00:34:12.436 15:39:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:12.436 15:39:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:12.436 15:39:16 -- pm/common@44 -- $ pid=3282515 00:34:12.436 15:39:16 -- pm/common@50 -- $ kill -TERM 3282515 00:34:12.436 15:39:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:12.436 15:39:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:12.436 15:39:16 -- pm/common@44 -- $ pid=3282537 00:34:12.436 15:39:16 -- pm/common@50 -- $ sudo -E kill -TERM 3282537 00:34:12.436 + [[ -n 2753731 ]] 00:34:12.436 + sudo kill 2753731 00:34:12.444 [Pipeline] } 00:34:12.464 [Pipeline] // stage 00:34:12.471 [Pipeline] } 00:34:12.489 [Pipeline] // timeout 00:34:12.493 [Pipeline] } 00:34:12.512 [Pipeline] // catchError 00:34:12.516 [Pipeline] } 00:34:12.533 [Pipeline] // wrap 00:34:12.537 [Pipeline] } 00:34:12.548 [Pipeline] // catchError 00:34:12.555 [Pipeline] stage 00:34:12.557 [Pipeline] { (Epilogue) 00:34:12.571 [Pipeline] catchError 00:34:12.573 [Pipeline] { 00:34:12.588 [Pipeline] echo 00:34:12.590 Cleanup processes 00:34:12.597 [Pipeline] sh 00:34:12.932 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:12.933 3282618 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:12.933 3282960 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:12.946 [Pipeline] sh 00:34:13.228 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:13.228 ++ grep -v 'sudo pgrep' 00:34:13.228 ++ awk '{print $1}' 00:34:13.228 + sudo kill -9 3282618 00:34:13.243 [Pipeline] sh 00:34:13.529 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:13.529 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:34:18.793 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:34:22.092 [Pipeline] sh 00:34:22.376 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:22.376 Artifacts sizes are good 00:34:22.394 [Pipeline] archiveArtifacts 00:34:22.403 Archiving artifacts 00:34:22.562 [Pipeline] sh 00:34:22.846 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:22.862 [Pipeline] cleanWs 00:34:22.874 [WS-CLEANUP] Deleting project workspace... 00:34:22.874 [WS-CLEANUP] Deferred wipeout is used... 00:34:22.880 [WS-CLEANUP] done 00:34:22.882 [Pipeline] } 00:34:22.901 [Pipeline] // catchError 00:34:22.912 [Pipeline] sh 00:34:23.193 + logger -p user.info -t JENKINS-CI 00:34:23.201 [Pipeline] } 00:34:23.218 [Pipeline] // stage 00:34:23.223 [Pipeline] } 00:34:23.237 [Pipeline] // node 00:34:23.243 [Pipeline] End of Pipeline 00:34:23.288 Finished: SUCCESS
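[Editor's note] Two idioms in the autopackage tail above are worth spelling out. First, the power/ resource monitors: each collect-* script is started in the background with a shared epoch-stamped -p tag, then torn down by TERMing the pid recorded in its pidfile. A minimal sketch, assuming $PM points at spdk/scripts/perf/pm and $POWER at the .../output/power directory; the loop structure is inferred, but the script names, flags, and pidfile names are the ones checked in the log.

  # Startup, as in start_monitor_resources: one shared timestamp tags all logs.
  ts=$(date +%s)
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
      "$PM/$mon" -d "$POWER" -l -p "monitor.autopackage.sh.$ts" &
  done
  sudo -E "$PM/collect-bmc-pm" -d "$POWER" -l -p "monitor.autopackage.sh.$ts" &

  # Teardown, as in signal_monitor_resources TERM near the end of the run:
  for pidfile in "$POWER"/collect-{cpu-load,vmstat,cpu-temp}.pid; do
      [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
  done
  # the BMC collector runs as root, so its TERM goes through sudo as well
  [[ -e "$POWER/collect-bmc-pm.pid" ]] && sudo -E kill -TERM "$(cat "$POWER/collect-bmc-pm.pid")"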
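[Editor's note] Second, the "Cleanup processes" idiom used in both the prologue and the epilogue: list everything still running out of the workspace tree, drop the pgrep itself from the listing, and kill whatever remains. A sketch, with $WS standing in for the workspace root:

  # pgrep -af matches and prints full command lines under the workspace
  pids=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # $pids is deliberately unquoted so multiple pids split into arguments;
  # '|| true' mirrors the '+ true' in the log: an empty kill must not fail the stage
  sudo kill -9 $pids || true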